US20050168358A1 - Method and apparatus for coded symbol stuffing in recording systems - Google Patents
- Publication number
- US20050168358A1 (application US10/767,831)
- Authority
- US
- United States
- Prior art keywords
- data
- separator
- rll
- blocks
- bits
- Prior art date
- Legal status
- Granted
Classifications
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M5/00—Conversion of the form of the representation of individual digits
- H03M5/02—Conversion to or from representation by pulses
- H03M5/04—Conversion to or from representation by pulses the pulses having two levels
- H03M5/14—Code representation, e.g. transition, for a given bit cell depending on the information in one or more adjacent bit cells, e.g. delay modulation code, double density code
- H03M5/145—Conversion to or from block codes or representations thereof
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/14—Digital recording or reproducing using self-clocking codes
- G11B20/1403—Digital recording or reproducing using self-clocking codes characterised by the use of two levels
- G11B20/1423—Code representation depending on subsequent bits, e.g. delay modulation, double density code, Miller code
- G11B20/1426—Code representation depending on subsequent bits, e.g. delay modulation, double density code, Miller code conversion to or from block codes or representations thereof
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Error Detection And Correction (AREA)
Description
- The present invention relates to data storage and retrieval systems. More particularly, the present invention relates to a method and apparatus for coded symbol stuffing in optical and magnetic recording systems where run length limiting coding schemes are used.
- Generally, both data storage/retrieval systems and data transmission systems communicate information. Storage/retrieval systems communicate information through time, while data transmission systems communicate information through space. Typically, data storage and retrieval systems use a read/write head to communicate data to a corresponding one of substantially concentric tracks or channels in the media. Using various modulation techniques, data transmission systems similarly communicate data over channels in the transmission media or receive data from channels in the media. Storage/retrieval systems and data transmission systems often utilize encoding/decoding schemes for error detection, for privacy, and for data compression.
- One common type of coding scheme is referred to as Run Length Limiting. Run length limiting encoding schemes involve separating consecutive “1s” in a binary sequence by some predefined number of zeros. Coded data sequences having this property are referred to as Run Length Limited (RLL) codes.
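- As a concrete illustration of the run-length property itself, the short Python check below (our own helper, not anything defined in this patent) verifies that a binary sequence contains no run of zeros longer than a chosen limit k.

```python
def satisfies_k_constraint(bits, k):
    """Return True if the bit sequence never contains more than k consecutive zeros."""
    run = 0
    for b in bits:
        run = run + 1 if b == 0 else 0
        if run > k:
            return False
    return True

# Example: at most 2 consecutive zeros allowed.
print(satisfies_k_constraint([1, 0, 0, 1, 0, 1], k=2))  # True
print(satisfies_k_constraint([1, 0, 0, 0, 1], k=2))     # False
```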
- Conventional systems that utilize RLL coding schemes, such as optical and magnetic storage systems as well as some communication systems, typically include an outer error correcting code (ECC) and a run length limiting (RLL) encoder. Such systems may also include an optional inner channel encoder. Data is encoded first by the ECC and then passed through the RLL encoder. If the optional inner channel encoder is used, the RLL encoded data is then passed through the inner channel encoder. The RLL encoded data, or the inner channel encoded data, can then be pre-coded before being recorded onto channels on the media.
- Typically, at the detection side, a Viterbi detector is used to reconstruct the coded bits from the channel; however, due to electronic and media noise in the channel, conventional detectors cannot recover the original data with an arbitrarily small error probability. To correct errors after the coded bits are reconstructed by the Viterbi detector, an ECC decoder is used at the output of the read/write channel. Generally, the ECC decoder decreases the output Bit-Error Rate (BER) and the Sector Failure Rate (SFR) of the channel to the levels typically provided in the technical specifications for the implementing apparatus.
- It is known that the RLL code typically facilitates the operation of the timing circuits. At the same time, the RLL code shapes the spectrum of the signal and modifies the distance properties of the output code words of the channel. Since the RLL code affects both the shape of the signal spectrum and the distance properties of the output, the RLL code can be used to improve the BER and the SFR characteristics of the system.
- Conventional RLL coding schemes employ a state transition diagram. In a finite state encoder, arbitrary user data (p) is encoded to constraint data (q) via a finite state machine, where p and q represent sequences of data objects, each containing two or more elements. The data rate of the encoder can be defined as p/q (“p divided by q”), provided that, at each stage of the encoding process, one p-object of user data (p) is encoded to one q-object of constraint data (q) in such a way that the concatenation of the encoded q-objects obeys the given constraint.
- The finite-state machine has multiple states, and the encoder or decoder moves from one state to another after the generation of each output object. A single error in the received sequence can trigger the generation of wrong states in the decoder, resulting in long sequences of errors. This phenomenon is referred to as error propagation.
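- To make the rate p/q, state-to-state mechanics concrete, the toy rate-1/2, two-state encoder below is our own illustration of the pattern described above; the table is arbitrary and is not an encoder from this patent. A decoder built the same way tracks the state from the received symbols, which is why a single error can send it down a wrong sequence of states.

```python
# Toy rate-1/2 finite-state encoder: 1 input bit -> 2 output bits, two states.
# The table is illustrative only; it is not the encoder defined in this patent.
TABLE = {
    # (state, input_bit): (output_bits, next_state)
    ("A", 0): ((0, 1), "A"),
    ("A", 1): ((1, 0), "B"),
    ("B", 0): ((1, 0), "A"),
    ("B", 1): ((0, 1), "B"),
}

def fsm_encode(bits, state="A"):
    out = []
    for b in bits:
        symbols, state = TABLE[(state, b)]
        out.extend(symbols)
    return out

print(fsm_encode([1, 0, 1, 1]))  # 8 output bits for 4 input bits (rate 1/2)
```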
- It is expected that the received sequence to be decoded will not be identical to the transmitted sequence due to a variety of factors, such as inter-symbol interference (ISI), damage to the storage medium, noise, and the like.
- These factors lead to errors in the decoded sequence, and the decoder must account for these errors. For the purpose of limiting error propagation, the decoder can be implemented via a sliding-block decoder, which is a decoder having a decoding window of a fixed size. The encoded data sequence is decoded a portion at a time, such that recovery of a specific original bit involves only a portion of the received sequence. Specifically, the portion of the received sequence being decoded is the portion of the sequence that falls within the decoding window of the specific bit. Thus, the decoding process can be considered as a sequence of decoding decisions where the decoding window “slides” over the sequence to be decoded. The sliding block decoder limits the effect of an error in the received sequence to the decoding decisions made within the window, thereby affecting only a limited number of recovered bits.
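- The sliding-window idea can be sketched generically as follows; the per-window decision rule is supplied by the caller, and the majority-vote example is ours, chosen only to make the snippet runnable.

```python
def sliding_block_decode(received, window, decide):
    """Decode one output per position by looking only at a fixed-size window.

    `decide` maps a tuple of `window` received symbols to one decoded bit, so a
    channel error can only disturb the decisions whose windows cover it.
    """
    out = []
    for i in range(len(received) - window + 1):
        out.append(decide(tuple(received[i:i + window])))
    return out

# Trivial example decision rule: majority vote over a 3-symbol window.
decoded = sliding_block_decode([1, 0, 0, 1, 1, 0], window=3,
                               decide=lambda w: int(sum(w) >= 2))
print(decoded)  # [0, 0, 1, 1]
```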
- The size of the decoding window of a sliding block decoder is significant. The size of the window provides an upper bound on the expected amount of error propagation, and it provides an indication of the complexity of the decoder (and the corresponding size of the decoder's hardware implementation).
- One technique for constructing finite state encoders is the state-splitting algorithm, which reduces the design procedure to a series of steps. As a design technique, the state-splitting algorithm works well for small and moderate values of p, but when p is relatively large, the state-splitting algorithm encounters too many possible data-to-codeword assignments in the encoding graph, making design difficult. Moreover, given this complexity, a poor choice of assignments could lead to a costly implementation. In practice, the implemented design should include the fewest possible number of states. However, the state-splitting algorithm does not directly solve the general problem of designing codes that achieve, for example, the minimum number of encoder states, the smallest sliding-block decoding window, or the minimum hardware complexity (a less precise measure).
- Recently, various types of iterative detection and decoding schemes have been developed for use in data storage and data communication systems, based on turbo codes, Low Density Parity Check (LDPC) codes, and turbo product codes. These types of codes provide very low BERs, but they usually require the use of an interleaver after the RLL and optional inner channel encoder(s). An interleaver changes the order of the bits in a sequence. Because the interleaver reorders bits that have already been encoded, it can effectively nullify the operation of the RLL encoder. Since encoders based on finite-state machines transform all (or almost all) data bits while generating the output code words, the use of such codes in channels that interleave the coded bits is virtually impossible, or at least severely restricted.
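- The interleaving problem can be seen with a tiny experiment of our own: shuffle a sequence that satisfies k = 2 at the bit level and measure the longest zero run afterwards.

```python
import random

def max_zero_run(bits):
    run, worst = 0, 0
    for b in bits:
        run = run + 1 if b == 0 else 0
        worst = max(worst, run)
    return worst

coded = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # satisfies k = 2
print(max_zero_run(coded))                # 2

random.seed(0)
worst = 0
for _ in range(10):
    permuted = coded[:]
    random.shuffle(permuted)              # bit-level interleaving ignores the constraint
    worst = max(worst, max_zero_run(permuted))
print(worst)                              # almost always exceeds 2, so the RLL property is lost
```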
- A method of forming RLL coded data streams uses separator blocks to limit consecutive zeros to a predetermined maximum. An input code word is divided into data portions and a separator portion. Each data portion is inserted into an output codeword without encoding and separated from a next data portion by a space. The separator portion is encoded into non-zero separator sub-matrices, which are stuffed into the spaces between the data portions. The separator portion and the data portions may be separately permuted without violating a constraint on consecutive zeros in the output.
-
- FIG. 1 is a block diagram illustrating a read channel architecture in which the embodiments of the present invention can be implemented.
- FIG. 2 is a generic separator matrix used by the m/n encoder to generate nonzero separators stuffed in the uncoded data stream.
- FIG. 3 is an illustrative example of the separator matrix used by the m/n encoder to generate nonzero separators stuffed in the uncoded data stream.
- FIG. 4 is the block diagram illustrating the principle of the coded bit stuffing providing the k-constraint.
- FIG. 5 is the block diagram illustrating operations of the RLL encoder and permuter.
- FIG. 6 is the block diagram illustrating the principle of the coded bit stuffing combined with data interleaving (permutation).
- FIG. 7 is the block diagram illustrating the implementation of the 32/33 RLL code with k=12 constructed by stuffing of 3 separators in the data stream.
- FIG. 8 is the block diagram illustrating the implementation of the 48/49 RLL code with k=13 constructed by stuffing of 4 separators in the data stream.
- FIG. 9 is a graph of the bit error rates (BER) versus signal to noise ratio (SNR) of a stuffed 48/49 RLL code having a k-constraint of 13 both before and after RLL decoding with ND=2.5 and no jitter.
- FIG. 10 is a graph of the BER versus SNR of a conventional 48/49 RLL code having a k-constraint of 13 both before and after RLL decoding with ND=2.5 and no jitter.
- FIG. 11 is a graph of the sector failure rates versus SNR for conventional RLL codes and for stuffed RLL codes of the present invention in a channel with ND=2.5 and no jitter.
- FIG. 12 is a graph of the BER versus SNR of a stuffed 48/49 RLL code having a k-constraint of 13 both before and after RLL decoding with ND=2.5 and 50% jitter.
- FIG. 13 is a graph of the BER versus SNR of a conventional 48/49 RLL code having a k-constraint of 13 both before and after conventional RLL decoding with ND=2.5 and 50% jitter.
- FIG. 14 is a graph of the sector failure rates versus SNR for conventional RLL codes and for stuffed RLL codes of the present invention in a longitudinal channel with 50% jitter and 50% electronic noise.
- FIG. 15 is a graph of the BER versus SNR of the stuffed 48/49 RLL code in a PR2 channel with ND=2.0 and no jitter.
- FIG. 16 is a graph of the BER versus SNR of the conventional RLL code in a PR2 channel with ND=2.0 and no jitter.
- FIG. 17 is a graph of the sector failure rates versus SNR for conventional RLL codes and for stuffed RLL codes of the present invention in a PR2 channel with ND=2.0 and no jitter.
- While the above-identified illustrations set forth preferred embodiments, other embodiments of the present invention are also contemplated, some of which are noted in the discussion. In all cases, this disclosure presents the illustrated embodiments of the present invention by way of representation and not limitation. Numerous other minor modifications and embodiments can be devised by those skilled in the art which fall within the scope and spirit of the principles of this invention.
- FIG. 1 illustrates a read/write channel of magnetic and/or optical disc drives. As shown, the system 10 reads and writes data to an inner subchannel 12 of the magnetic and/or optical disc of the disc drive. The system has a Reed-Solomon (RS) Error Correction Code (ECC) encoder 14, a Run Length Limited (RLL) encoder 16, channel encoder(s) 18, an interleaver precoder 20, head media 22, a front end and timing element 24, a channel detector 26, an outer decoder 28, an RLL decoder 30, and an RS ECC decoder 32.
- Generally, RS codes are linear block codes, meaning that they can be processed all at once as blocks of data. The RS algorithm takes data words, splits them up into code words, and adds redundant parity bytes in order to correct symbol errors. RS codes are written as RS(N, K), where there are K data symbols in each N-symbol codeword. This means that there are N−K parity symbols in each codeword, and the RS algorithm can correct up to (N−K)/2 symbol errors in each codeword.
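- For concreteness, the RS(N, K) bookkeeping can be written out directly; RS(255, 239) is used below only as a familiar example pair and is not a parameter taken from this patent.

```python
def rs_parameters(n, k):
    """Number of parity symbols and correctable symbol errors for an RS(N, K) code."""
    parity = n - k
    correctable = parity // 2
    return parity, correctable

print(rs_parameters(255, 239))  # (16, 8): 16 parity symbols, corrects up to 8 symbol errors
```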
- RLL codes are codes that are limited by the number of flux changes that can be written in a given amount of disc area. In other words, RLL techniques limit the distance (run length) between magnetic flux reversals on the disc's surface. By limiting the run length, the RLL coding technique defines the size of the data block that can be written within a given amount of disc space.
- When data is presented to the system 10 for transmission over the inner subchannel 12, the RS ECC encoder 14 encodes the data and passes the RS encoded data to an RLL encoder 16. The RLL encoder encodes the RS encoded data and passes the RLL encoded data to a channel encoder 18. The channel encoder 18 encodes the RLL encoded data for the channel 12 and passes the encoded data to an interleaver precoder 20, which reorders the coded data. Finally, the head media 22 writes the data to the inner subchannel 12.
- Encoded data is read from the inner subchannel 12 of the disc by the heads media block 22. The encoded data is then processed by some analog filters, such as a preamp (preamplifier), a low pass filter (LPF), and other elements, a process that is sometimes referred to as “front-end processing”. The filtered signal is then sampled using timing circuitry. In FIG. 1, the filtering and sampling elements are indicated by the front end and timing element 24.
- The data is then passed to a channel detector 26 and to an outer decoder 28. The outer decoder 28 decodes the encoded data into RLL encoded data. The RLL encoded data is then decoded by the RLL decoder 30 and passed to an RS ECC decoder 32 for decoding into the originally transmitted data.
- The read/write channels of magnetic and/or optical disc drives include a number of different encoding/decoding circuits, each encoding or decoding data in different manners for different purposes. The various circuits shown in the blocks of FIG. 1 can be implemented as integrated circuits, discrete components, or suitably programmed processing circuitry. Additionally, while the discussion has included references to head media, other devices may be utilized to write data to a channel and to read data from the channel, such as a transceiver.
- In general, the system 10 utilizes a combinatorial object called a separator matrix S for stuffing separator bits between coded bits of the input data word. The separator matrix S is a matrix of size L×n and consists of binary elements 0 and 1. Its parameters are n1, n2, . . . , nl and v0, v1, . . . , vl, where n = n1 + n2 + . . . + nl.
- FIG. 2 shows the generic structure of the separator matrix S. As shown, the matrix S is partitioned by l−1 boundaries 34 into l submatrices S1, S2, . . . , Sl of size L×n1, L×n2, . . . , L×nl, respectively. The matrix S is called the (v0, v1, . . . , vl)-separator matrix if 1) each submatrix Si, 1≦i≦l, consists of nonzero rows, or in other words each row of S consists of l nonzero separators s1, s2, . . . , sl; 2) each row of S has at most v0 consecutive leading zeros and at most vl consecutive trailing zeros; and 3) each row of S has at most vi consecutive zeros around the i-th boundary, 1≦i≦l−1. Specifically, in each row of the matrix S, the total number of consecutive zeros on the left and right sides of the i-th dotted vertical line is not greater than vi, and this inequality is satisfied at all l−1 boundaries and for all rows of the separator matrix.
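- The three separator-matrix conditions translate directly into a membership test. The sketch below reflects our reading of those conditions, with the submatrix widths n1, . . . , nl and the bounds v0, . . . , vl supplied by the caller; the example row and bounds correspond to the 7/8-encoder codewords discussed later in the text.

```python
def leading_zeros(bits):
    count = 0
    for b in bits:
        if b != 0:
            break
        count += 1
    return count

def trailing_zeros(bits):
    return leading_zeros(bits[::-1])

def is_separator_matrix(rows, widths, v):
    """Check the (v0, ..., vl)-separator conditions for rows split into l pieces of the given widths."""
    l = len(widths)
    assert len(v) == l + 1
    for row in rows:
        # split the row into the l separators s1..sl
        seps, pos = [], 0
        for w in widths:
            seps.append(row[pos:pos + w])
            pos += w
        # 1) every separator is nonzero
        if any(not any(s) for s in seps):
            return False
        # 2) leading/trailing zero bounds of the whole row
        if leading_zeros(row) > v[0] or trailing_zeros(row) > v[l]:
            return False
        # 3) zeros straddling each internal boundary
        for i in range(1, l):
            if trailing_zeros(seps[i - 1]) + leading_zeros(seps[i]) > v[i]:
                return False
    return True

# One row of the 7/8-encoder separator matrix (widths 3, 3, 2), with bounds we derived from its value sets.
print(is_separator_matrix([[0, 0, 1, 1, 1, 1, 0, 1]], widths=(3, 3, 2), v=(2, 4, 3, 1)))  # True
```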
- FIG. 3 illustrates an embodiment of the separator matrix S. In this embodiment, the separator matrix S has the parameters M=8, n=6, and l=3, where M is the number of rows, n is the number of columns, and l is the number of separators in each row. Each separator has two bits (n1=n2=n3=2), and v0=v1=v2=v3=1.
- A generic RLL encoding scheme using the (v0, v1, . . . , vl)-separator is constructed as follows. First, the input data word consisting of N bits is split into l+2 parts D0, D1, . . . , Dl+1 of lengths N0, N1, . . . , Nl+1, respectively. The first l+1 data parts D0, D1, . . . , Dl are placed directly into the output code word without encoding, as shown in FIG. 4.
- The last data part Dl+1 is sent to the m/n encoder 36, which converts m input bits into one of the rows of the separator matrix S. Thus, the output of the m/n encoder is a binary word of length n. It consists of l separators s1, s2, . . . , sl of lengths n1, n2, . . . , nl, respectively, created by the boundary lines 34 shown in FIG. 2.
- Finally, the output of the m/n encoder is split into its l separators, which are then stuffed between the first l+1 data parts in the output code word. The resulting coded bits satisfy the k-constraint, which requires no more than k consecutive zeros in the bit sequence.
- By separating the data bits with coded data (e.g. by “code stuffing”), bit errors are contained between separator blocks. More specifically, bit errors are prevented from propagating throughout the bit sequence, thereby minimizing transmission errors by limiting them to a particular block. In this instance, an error in encoded block s1 would effect only the subblock si, but not the data blocks Di.
-
FIG. 5 illustrates an embodiment of thesystem 10 having anRLL encoder 16, an Interleaver (permuter) 20, and achannel 38. As shown, input word {right arrow over (x)} is passed to theRLL encoder 16. TheRLL encoder 16 encodes the input word {right arrow over (x)} into an RLL encoded word {right arrow over (y)}. The RLL encoded word {right arrow over (y)} is then passed to the Interleaver (permuter) 20, which produces an output word {right arrow over (z)}. The output word {right arrow over (z)} is then passed onto thechannel 38. Thechannel 38 may be the inner subchannel 12 (as shown inFIG. 1 ), a communication link between a transmitter and a receiver, or any kind of communication or transmission channel, including magnetic recording channels. - Generally, the
permuter 20 changes the positions of the components in {right arrow over (y)} in a random manner to facilitate the operation of iterative detection scheme shown inFIG. 1 . Additionally, thepermuter 20 must preserve the operation of theRLL encoder 16. Specifically, thepermuter 20 operates in the subblocks rather than on the entire codeword all at once. In this way, the resulting output sequence {right arrow over (z)} satisfies the same or similar constraints as its input sequence {right arrow over (y)}. - The
RLL encoder 16 produces the following output sequence:
{right arrow over (y)}=[D0,s1,D1,S2, . . . ,sl,Dl],
where -
- D0=└x1,x2, . . . ,xN
0 ┘, - D1=└xN
0 +1,xN0 +2, . . . , xN0 +N1 ┘, and - Dl=└xN
0 + . . . +Nl +1,xN0 + . . . +Nl +2, . . . ,xN0 + . . . +Nl +Nl+1 ┘
are the uncoded blocks of the input data sequence {right arrow over (x)}, and
{right arrow over (s)}=[s1,s2, . . . ,sl]
is the output of the m/n-encoder, which converts the last m user bits
D l+1 =└x N0 + . . . +Nl +1 ,x N0 + . . . +Nl +2 , . . . ,x N0 + . . . +Nl +Nl+1 ┘
to the nonzero separators {si,1≦i≦l}.
- D0=└x1,x2, . . . ,xN
- The structure and properties of the output codeword y allows the
permuter 20 to preserve the k-constraint at its output while performing permuting operations, such as swapping any two data blocks (Di and Dj) within the output codeword {right arrow over (y)} without shifting the boundaries between the two data blocks if Di and Dj have the same length or swapping any two separators si and sj in D0=[x1,x2, . . . ,xN0] without shifting boundaries between blocks if si and sj have the same length. - As shown in
FIG. 6 , K data bits 40 are passed through permuter A 42 (for the first l data bits) and through permuter B 44 (for the l+1 data bit) to produce the output codeword 46. When theRLL encoder 16 performs swapping operations, the permuter A 42 swaps data blocks Di, and permuter B 44 swaps the separators si. Thus, none of these swapping operations changes the k-constraint provided by theRLL encoder 16. Thus, the k-constraint is preserved in the proposed scheme combining theRLL encoder 16 andpermuter 20. - A number of pseudo-random and structured permuters are capable of separately shifting the data blocks and the separator blocks in an encoded bit sequence, so as to encode the signal without unwanted shuffling of the coded bits. Thus, the technique can be used in the magnetic recording channels, and other storage and communication systems facilitating the operations of the iterative detection schemes with soft decisions, such as Turbo codes, Low Density Parity Check (LDPC) codes and Turbo-Product Codes (TPC).
-
FIG. 7 illustrates aRLL encoder 16 with a code rate of 32/33 and a k-constraint of k=12. In other words,FIG. 7 shows a (0,12) RLL code withrate 32/33. In this embodiment, the input word of the RLL encoder consists of four bytes and is encoded into a code word oflength 33. - As shown, the 32 data bits are split into five parts Di (specifically D0, D1, D2, D3, and D4) of lengths Ni (N0=4, N1=8, N2=8, and N3=5), and m (m=N4=7), respectively. The first four data parts are directly placed in the output code word without encoding. The last 7 data bits D4 are sent to the 7/8
encoder 36. The output of the 7/8encoder 36 is three nonzero separators s1, s2, and s3 of lengths n1=3, n2=3 and n3=2, respectively. In this case, the first twenty four data bits are placed in the output code word without encoding at positions 0-3, 7-14, 18-25 and 29-33, respectively. The separators s1, s2, s3 are inserted between the uncoded data parts D0, D1, D2, and D3 at positions 4-6, 15-17, and 26-28, respectively. - None of the separators s1, s2, s3 consists of all zeros. Specifically, separators s1 and s2 take nonzero values from the following range of values: (0,0,1); (0,1,0); (0,1,1); (1,0,0); (1,0,1); (1,1,0); and (1,1,1). Separator s3 takes a nonzero value from the following range of values: (0,1); (1,0); and (1,1). Thus, there are 147 words (7×7×3=147) of
length 8, which can be used as output codewords of the 7/8encoder 36. - In this example, the maximum number of consecutive zeros in the coded bit stream is not greater than 12. At the left boundary, data block D0 has 4 bits (4 possible zeros) and separator bit s1 has 3 bits (2 possible zeros). At the right boundary, data block D3 has 5 bits (5 possible zeros) and separator block s3 has 2 bits (1 possible zero). Thus, the maximum number of consecutive zeros at the boundaries is 12.
- Between s1 and s2, the maximum number of consecutive zeros is also 12, corresponding to the 2 possible zeros from each of s1 and s2, and the eight bits (8 possible zeros) associated with data block D1. Between s2 and s3, the maximum number of consecutive zeros is 11, corresponding to the 2 possible zeros from s2, the two bits (1 possible zero) from s3, and the eight bits (8 possible zeros) associated with D2. Therefore, the constructed code has the parameter k=12.
- This example demonstrates that an additional bit could be included without altering the zero constraint k. Specifically, by including an extra data bit in the part D2, it is possible to increase the length of the output code by one bit, resulting in a 33/34 RLL code with k=12 for all portions of the data stream.
- The 7/8 encoder/decoder can be implemented based on simple integer arithmetic. For example, each input bit u=(u0,u1, . . . ,u6) of the 7/8 encoder can be represented as an integer I=0, 1, 2, . . . , 127, using the following equality:
As discussed above, there are 147 possible output words oflength 8 with three nonzero separators s1, s2, and s3, and any 127 of them can be used to encode the final seven input data bits. - To form the encoder, let delta (A) be a predefined constant (0≦Δ≦20), and J=I+Δ. In this example, the integer J satisfies the following inequality:
Δ≦J≦127+Δ<147. - By altering the parameter delta (Δ) within the defined range, it is possible to produce 21 different versions of the encoder using the following encoding steps.
Encoding Step 1.s1 = (001), h = J, if 0 ≦ J < 21; s1 = (010), h = J − 21, if 21 ≦ J < 42; s1 = (011), h = J − 42, if 42 ≦ J < 63; s1 = (100), h = J − 63, if 63 ≦ J < 84; s1 = (101), h = J − 84, if 84 ≦ J < 105; s1 = (110), h = J − 105,if 105 ≦ J < 126; and s1 = (111), h = J − 126, if 126 ≦ J < 147; Encoding step 2.s2 = (001), g = h, if 0 ≦ h < 3; s2 = (010), g = h − 3, if 3 ≦ h < 6; s2 = (011), g = h − 6, if 6 ≦ h < 9; s2 = (100), g = h − 9, if 9 ≦ h < 12; s2 = (101), g = h − 12, if 12 ≦ h < 15; s2 = (110), g = h − 15, if 15 ≦ h < 18; and s2 = (111), g = h − 18, if 18 ≦ h < 21 Encoding step 3.s3 = (01), if g = 0; s3 = (10), if g = 1; and s3 = (11), if g = 2. - The decoding algorithm for the 7/8 decoder is constructed as follows.
- Decoding
Step 1. - Given codewords (c1, c2, and c3), the integer J is reconstructed using the following equality:
J=21*i(s 1)+3*i(s 2)+i(s 3),
where i(s) is an integer represented by the binary word Si. - Decoding
Step 2. - Subtracting Δ from J gives the integer I. The binary representation of I is then sent to the output of the decoder (7 bits).
- The 7/8 encoder/decoder described below effectively suppresses error propagation. To encode data using the 7/8
encoder 36, first the input data bits are split into three parts (a, b, and c) oflengths 3 bits, 3 bits and 1 bit, respectively. The input of the 7/8 encoder is a, b, and c. Theoutput 8 bits are also represented by three parts (s1, s2, and s3) oflength 3 bits, 3 bits, and 2 bits. Theoutput 8 bits are calculated as follows. -
Case 1. If a≠0 and b≠0, then -
- a) s1=a;
- b) s2=b; and
- c) s3=({overscore (c)}).
where {overscore (c)} is the binary compliment of c.
-
Case 2. If a≠0 and b=0, then -
- a) s3=(1,1);and
- b) s1=a and s2=(010), if c=0; or
- c) s1=a and s2=(001), if c=1.
-
Case 3. If a=0 and b≠0, then -
- a) s3=(1,1), and
- b) s1=(010), s2=b, if b∉{(001),(010)} and c=0,
- c) s1=(001), s2=b, if b∈{(001),(010)} and c=1,
- d) s1ε{(100),(011)}, s2=(011), if b=(001); or
- e) s1ε{(110),(101)}, s2=(011), if b=(010).
-
Case 4. If a=0 and b=0, then -
- a) s3=(1,1), and
- b) s1=(100), s2=(100), if c=0; and
- c) s1=(011), s2=(100), if c=1.
- To decode a received encoded signal, the 7/8 encoder/
decoder 36 divides the input codeword into three parts (s1, s2, and s3) having 3 bits, 3 bits and 2 bits, respectively. The output codewords of thedecoder 36 consist of three parts â, {circumflex over (b)} and ĉ oflengths 3 bits, 3 bits and 1 bit, respectively. The output codewords are calculated as follows. - Case 1 (7/8 Decoder). If s3ε{(01), (10)}, then
-
- a) â=s1,
- b) {circumflex over (b)}=s2, and
- c) ĉ is equal to the first component of s3.
- Case 2 (7/8 Decoder). If s3=(1,1) and s2ε{(001), (010)}, then
-
- a) â=s1,
- b) {circumflex over (b)}=(000), and
- c) ĉ is equal to the last component of s2.
- Case 3 (7/8 Decoder). If s3=(1,1), s2={(001), (010)}, but s1∈E {(001), (010)} then
-
- a) â=(000),
- b) {circumflex over (b)}=s2, and
- c) ĉ is eqal to the last component of s1.
- Case 4 (7/8 Decoder). If s3=(1,1), s2=(011), and s1∈{(001), (010)} then
-
- a) â (000),
- b) {circumflex over (b)}=(001), if s1ε{(100), (011)}, or
- c) {circumflex over (b)}=(010), if s1ε{(110), (101)}, and
- d) ĉ is equal to the last component of s1.
- Case 5 (7/8 Decoder). If s3=(1,1), s2=(100), and s1ε{(100), (001)} then
-
- a) â=(000),
- b) {circumflex over (b)}=(000), and
- c) ĉ is equal to the last component of s1.
Thus, thesystem 10 divides the input codeword into several parts, each part being composed of data bits. The last part is passed to the 7/8 encoder/decoder 36, which utilizes the last portion of the input codeword to generate separator codes for stuffing between the other parts, which are inserted into the output codeword without encoding. The resulting output codeword is constrained such that the maximum number of consecutive zeros is determined by the k-constraint. Here, no more than 12 consecutive zeros is possible.
- In
FIG. 8 , a (0,13) RLL code withrate 48/49 is illustrated. In this instance, the input codeword of theencoder 36 consists of 48 bits, and is encoded into an output codeword oflength 49. - First, the 48 data bits are split into six parts D0, D1, . . . , D5 of lengths N0=5, N1=9, N2=9, N3=10, N4=5, and m=N5=10, respectively. The first five data parts D0, . . . ,D4 are placed directly in the output codeword without encoding at bit positions 0-4, 8-16, 20-28, 32-41, and 44-48, respectively.
- The last 10 data bits illustrated as D5 are passed to the 10/11 encoder, which uses D5 to produce an output consisting of four separators, s1, s2, s3, and s4. The output separators s1, s2, s3, and s4 have lengths n1=3, n2=3, n3=3 and n4=2, respectively.
- None of the separators s1, s2, s3, and s4 consists of all zeros. Each of the separators s1, s2, and s3 takes one of the following seven nonzero values: (0,0,1); (0,1,0); (0,1,1); (1,0,0); (1,0,1); (1,1,0); and (1,1,1). The separator s4 takes one of three nonzero values: (0,1); (1,0); and (1,1). Therefore, there are a total of 1,029 words (7*7*7*3=1,029) of length 11 that can be used as output codewords of the 10/11 encoder 36. These codewords form the rows of the separator matrix of size 1029×11 with the following parameters: -
- M=1029, n=11, l=4,
- n1=n2=n3=3, n4=2,
- v0=2, v1=v2=4, v3=3, and v4=1, and any 1024 of them may be used to construct the 48/49 RLL code with k=13.
- In this example, the maximum number of consecutive zeros in the coded bit stream is not greater than 13. At the left boundary of a codeword, data block D0 contributes 5 bits (up to 5 zeros) and separator block s1 contributes up to 2 leading zeros; at the right boundary, data block D4 contributes 5 bits (up to 5 zeros) and separator block s4 contributes up to 1 trailing zero. Where two codewords meet, these runs join, so the maximum number of consecutive zeros across a codeword boundary is 1+5+5+2=13.
- Between s1 and s2, the maximum number of consecutive zeros is 13, corresponding to up to 2 zeros from each of s1 and s2 plus the 9 bits (up to 9 zeros) of data block D1. Between s2 and s3, the maximum is likewise 13, with the 9 bits of D2. Between s3 and s4, the maximum is 13, corresponding to up to 2 zeros from s3, 1 zero from s4, and the 10 bits (up to 10 zeros) of D3. Therefore, the constructed code has the parameter k=13.
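The k=13 bound can be verified mechanically. The sketch below uses illustrative names and enumerates the separator values directly rather than running the 10/11 encoder: it assembles a 49-bit codeword from the 38 directly-copied data bits and four given nonzero separators at the positions listed above, then checks the worst case of all-zero data over every choice of nonzero separators, including the run across a codeword boundary.

```python
from itertools import product

def assemble_49(data38, s1, s2, s3, s4):
    """Place D0..D4 (the 38 directly-copied data bits) and the four separators so that
    data occupies positions 0-4, 8-16, 20-28, 32-41, 44-48 and separators the rest."""
    d = list(data38)
    D0, D1, D2, D3, D4 = d[:5], d[5:14], d[14:23], d[23:33], d[33:38]
    return D0 + list(s1) + D1 + list(s2) + D2 + list(s3) + D3 + list(s4) + D4

def max_zero_run(bits):
    run = best = 0
    for bit in bits:
        run = run + 1 if bit == 0 else 0
        best = max(best, run)
    return best

nonzero3 = [p for p in product((0, 1), repeat=3) if any(p)]
nonzero2 = [p for p in product((0, 1), repeat=2) if any(p)]
# All-zero data is the worst case; doubling the codeword covers the boundary run.
runs = [max_zero_run(2 * assemble_49([0] * 38, s1, s2, s3, s4))
        for s1, s2, s3, s4 in product(nonzero3, nonzero3, nonzero3, nonzero2)]
assert max(runs) == 13   # the k = 13 constraint holds and the bound is tight
```

The check enumerates all 7*7*7*3=1,029 separator combinations, mirroring the separator matrix described above.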
- The input block D5 of the 10/11 encoder 36 consists of 10 bits, which are represented by an integer I=0, 1, 2, . . . , 1023. Here, there are 1,029 possible codewords of length 11 bits. Each codeword consists of four nonzero separators (s1, s2, s3, and s4), and 1024 of these words can be used to encode the 10 data bits as follows. Let Δ be some predefined constant (0≦Δ<5), and let J=I+Δ. By changing the parameter Δ, it is possible to construct 5 different versions of the encoder as follows.
- Encoding step 1 (10/11 Encoder).
- s1 = (001), h = J, if 0 ≦ J < 147;
- s1 = (010), h = J − 147, if 147 ≦ J < 294;
- s1 = (011), h = J − 294, if 294 ≦ J < 441;
- s1 = (100), h = J − 441, if 441 ≦ J < 588;
- s1 = (101), h = J − 588, if 588 ≦ J < 735;
- s1 = (110), h = J − 735, if 735 ≦ J < 882; and
- s1 = (111), h = J − 882, if 882 ≦ J < 1029.
- Encoding step 2 (10/11 Encoder).
- s2 = (001), g = h, if 0 ≦ h < 21;
- s2 = (010), g = h − 21, if 21 ≦ h < 42;
- s2 = (011), g = h − 42, if 42 ≦ h < 63;
- s2 = (100), g = h − 63, if 63 ≦ h < 84;
- s2 = (101), g = h − 84, if 84 ≦ h < 105;
- s2 = (110), g = h − 105, if 105 ≦ h < 126; and
- s2 = (111), g = h − 126, if 126 ≦ h < 147.
- Encoding step 3 (10/11 Encoder).
- s3 = (001), f = g, if 0 ≦ g < 3;
- s3 = (010), f = g − 3, if 3 ≦ g < 6;
- s3 = (011), f = g − 6, if 6 ≦ g < 9;
- s3 = (100), f = g − 9, if 9 ≦ g < 12;
- s3 = (101), f = g − 12, if 12 ≦ g < 15;
- s3 = (110), f = g − 15, if 15 ≦ g < 18; and
- s3 = (111), f = g − 18, if 18 ≦ g < 21.
- Encoding step 4 (10/11 Encoder).
- s4 = (01), if f = 0;
- s4 = (10), if f = 1; and
- s4 = (11), if f = 2.
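Encoding steps 1-4 amount to writing J in the mixed radix (7, 7, 7, 3) — note that 147 = 7·21 and 21 = 7·3 — and adding one to each digit so that no separator is all-zero. A sketch under that reading, with illustrative names and Δ left as a parameter:

```python
def encode_10_11(data10, delta=0):
    """Map a 10-bit block D5 to four nonzero separators (s1, s2, s3, s4)
    following encoding steps 1-4 above."""
    I = int("".join(str(b) for b in data10), 2)   # the 10 data bits as an integer 0..1023
    J = I + delta                                 # J stays below 1029 for the allowed delta
    q1, h = divmod(J, 147)                        # step 1
    q2, g = divmod(h, 21)                         # step 2
    q3, f = divmod(g, 3)                          # step 3; step 4 uses f directly
    to_bits = lambda value, width: tuple((value >> i) & 1 for i in reversed(range(width)))
    # each digit is offset by +1, so (000) and (00) never appear as separators
    return to_bits(q1 + 1, 3), to_bits(q2 + 1, 3), to_bits(q3 + 1, 3), to_bits(f + 1, 2)
```

For example, with Δ=0 and all ten data bits equal to 1 (I=J=1023), the routine returns s1=(1,1,1), s2=(1,1,1), s3=(1,1,0), s4=(0,1), matching steps 1-4 above (1023=6·147+141, 141=6·21+15, 15=5·3+0).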
The decoding algorithm for the 10/11 decoder is constructed as follows.
- Decoding Step 1. Given separators s1, s2, s3, and s4, the integer J can be reconstructed using the following formula:
J = 147*i(s1) + 21*i(s2) + 3*i(s3) + i(s4),
where i(s) is an integer represented by binary word s.
- Decoding Step 2. Subtracting Δ from J results in I, the binary representation of which is passed to the output of the decoder (10 bits).
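Reading i(s) in Decoding Step 1 as the index of s among its allowed nonzero values (i.e., the separator's binary value minus one, so i((001))=0, …, i((111))=6 and i((01))=0, i((10))=1, i((11))=2) makes the two decoding steps the exact inverse of the encoding steps. That indexing is an assumption made for the sketch below, whose names are again illustrative.

```python
def decode_10_11(s1, s2, s3, s4, delta=0):
    """Recover the 10-bit block D5 from the four separators (decoding steps 1 and 2)."""
    value = lambda bits: int("".join(str(b) for b in bits), 2)
    index = lambda bits: value(bits) - 1          # i(s): position among the nonzero patterns
    J = 147 * index(s1) + 21 * index(s2) + 3 * index(s3) + index(s4)
    I = J - delta
    return tuple((I >> i) & 1 for i in reversed(range(10)))

# Round-trip check over all 1024 data blocks (uses encode_10_11 from the sketch above).
for I in range(1024):
    block = tuple((I >> i) & 1 for i in reversed(range(10)))
    assert decode_10_11(*encode_10_11(block)) == block
```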
- As with the 7/8 encoder, the 10/11 encoder produces separator blocks for insertion into the output codeword, and the separator blocks serve to limit error propagation during transmission. In this embodiment, the system 10 divides the 10 input data bits into four parts (a, b, c, and d) of lengths 3 bits, 3 bits, 3 bits, and 1 bit, respectively. The 11 output bits are also partitioned into four corresponding parts (s1, s2, s3, and s4) of lengths n1=n2=n3=3 and n4=2 bits. Depending on the bits in the four input parts (a, b, c, and d), the output bits are calculated as follows. -
Case 1. If a≠0, b≠0, and c≠0, then -
- a) s1=a;
- b) s2=b;
- c) s3=c; and
- d) s4=(d, {overscore (d)}), where {overscore (d)} is the binary complement of d.
-
Case 2. If a≠0, b=0, and c≠0, then -
- a) s1=a;
- b) s3=c;
- c) s4=(1,1); and
- d) s2 is defined by a and d according to the following replacement Table 1.
TABLE 1
a | d | s2
---|---|---
001 | 0 | 010
001 | 1 | 001
010 | 0 | 100
010 | 1 | 011
011 | 0 | 110
011 | 1 | 101
100 | 0 | 001
100 | 1 | 111
101 | 0 | 010
101 | 1 | 011
110 | 0 | 100
110 | 1 | 101
111 | 0 | 110
111 | 1 | 111
Case 3. If a=0, b≠0, and c≠0, then -
- a) s2=b;
- b) s3=c;
- c) s4=(1,1), and
- d) s1 is defined by b and d according to the following replacement Table 2.
TABLE 2
b | d | s1
---|---|---
001 | 0 | 010
001 | 1 | 111
010 | 0 | 100
010 | 1 | 110
011 | 0 | 001
011 | 1 | 011
100 | 0 | 101
100 | 1 | 111
101 | 0 | 100
101 | 1 | 010
110 | 0 | 001
110 | 1 | 110
111 | 0 | 101
111 | 1 | 011
Case 4. If a=0, b=0, and c≠0, then -
- a) s3=c;
- b) s4=(1,1); and
- c) s1 and s2 are defined by d according to the following replacement Table 3.
TABLE 3
s1 | s2 | d
---|---|---
111 | 101 | 0
110 | 111 | 1
Case 5. If a≠0, b≠0, and c=0, then -
- a) s1=a;
- b) s3=b;
- c) s4=(1,1), and
- d) s2 is defined by a and d according to the following replacement Table 4.
TABLE 4
a | d | s2
---|---|---
001 | 0 | 100
001 | 1 | 101
010 | 0 | 010
010 | 1 | 111
011 | 0 | 010
011 | 1 | 001
100 | 0 | 100
100 | 1 | 011
101 | 0 | 110
101 | 1 | 001
110 | 0 | 001
110 | 1 | 011
111 | 0 | 010
111 | 1 | 011
Case 6. If a=0, b≠0, and c=0, then -
- a) s4=(1,1), and
- b) s1, s2, and s3 are defined by b and d according to the following replacement Table 5.
TABLE 5 b d s1 s2 s3 0 0 0 0 1 0 0 1 1 0 1 0 1 0 0 0 1 1 0 0 1 1 0 1 1 0 0 0 1 0 1 0 1 1 0 1 1 0 1 0 0 1 1 1 0 1 1 0 1 1 1 0 0 1 0 0 0 1 0 1 1 0 0 1 1 0 1 0 1 0 1 0 1 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 1 1 1 0 0 1 1 1 1 1 1 0 1 0 0 0 0 1 1 1 0 0 0 0 1 1 0 0 1 0 1 1 1 0 0 0 1 0 1 0 1 0 1 0 1 1 0 1 0 0 1 1 0 1 1 1 0 1 1 0 1 0 1 0 1 1 0 0 1 0 0 1 1 0 0 0 1 1 1 0 1 1 0 0 1 1 0 0 1 0 1 1 1 0 0 0 1 1 1 1 0 0 1 1 1 1 1 0 0 1 1 1 1 0 1 0 -
Case 7. If a≠0, b=0, and c=0, then -
- a) s4=(1,1), and
- b) s1, s2, and s3 are defined by a and d according to the following replacement Table 6.
TABLE 6 a d s1 s2 s3 0 0 1 0 0 0 1 1 1 1 0 1 1 0 0 1 1 0 0 1 1 1 1 1 0 0 0 1 0 0 0 1 0 1 1 0 0 0 1 0 1 0 1 0 1 0 1 1 0 0 1 0 0 1 1 0 0 1 1 1 0 0 0 1 1 0 1 1 1 0 1 1 1 0 0 1 0 0 1 0 0 0 1 0 0 1 1 0 0 1 1 1 0 0 1 1 0 0 1 1 0 1 0 0 1 0 1 0 1 0 1 1 0 1 0 1 1 1 0 1 1 1 0 1 1 0 1 1 0 0 1 1 0 0 0 1 0 1 1 0 1 0 1 1 1 0 1 0 1 0 1 1 0 1 1 0 1 1 1 0 0 1 1 1 0 0 1 0 1 1 1 1 1 0 1 1 1 0 0 1 1 0 - Decoding of a received encoded signal operates similar to the decoding of a 7/8 encoded signal as described with respect to
FIG. 7 above. To decode a received encoded signal, the 10/11 encoder/decoder 36 divides the input codeword into four parts (s1, s2, s3, and s4) having 3 bits, 3 bits, 3 bits, and 2 bits, respectively. The output codewords of the decoder 36 consist of four parts â, {circumflex over (b)}, ĉ and {circumflex over (d)} of lengths 3 bits, 3 bits, 3 bits, and 1 bit, respectively. The output words are calculated as follows. - By assigning a0=i(s1), b0=i(s2), and c0=i(s3), a0, b0, and c0 are the integers representing the separators s1, s2, and s3, respectively.
-
Case 1. If s4∈{(01), (10)}, then -
- a) â=s1;
- b) {circumflex over (b)}=s2;
- c) ĉ=s3; and
- d) {circumflex over (d)} is equal to the first component of s4.
-
Case 2. Let T1(i) be the last three bits of the i-th row of Table 1. If s4=(1,1) and s2=T1(a0), then -
- a) â=s1;
- b) {circumflex over (b)}=(000);
- c) ĉ=s3; and
- d) {circumflex over (d)} is equal to the last bit of s2, if a0≠4; and is equal to the second bit of s2, if a0=4.
-
Case 3. Let T2(i) be the last three bits of the i-th row of Table 2. If s4=(1,1) and s1=T2(b0), then -
- a) â=(000);
- b) {circumflex over (b)}=s2;
- c) ĉ=s3;
- d) {circumflex over (d)} is equal to the second bit of s1, if b0>1; and is equal to the last bit of s1, if b0=1.
-
Case 4. If (a0=7 and b0=5), or (a0=6 and b0=7), then -
- a) â=(000);
- b) {circumflex over (b)}=(000);
- c) ĉ=s3;
- d) {circumflex over (d)} is equal to the second bit of s2.
-
Case 5. Let T4(i) be the last three bits of the i-th row of Table 4. If s4=(1,1) and s2=T4(a0), then -
- a) â=s1;
- b) {circumflex over (b)}=s3;
- c) ĉ=(000);
- d) {circumflex over (d)} is equal to the second bit of s2, if a0=6; and is equal to the last bit of s2, if a0≠6.
- Table 7 is used for decoding in cases 6 and 7, with the row index computed as
j=3*(a0−1)+(c0−1−(c0−1) mod 2)/2.
TABLE 7
0 1 1 1 1 0 0 1 0 0 1 1 1 0 1 0 0 0 1 0 1 1 1 0 0 1 0 0 1 0 1 1 0 1 1 1 1 1 1 0 0 1 0 0 1 0 0 0 0 1 0 1 1 1 0 1 0 0 0 1
Case 6. If s4=(1,1) and t7(j)=0, then -
- a) â=(000);
- b) {circumflex over (b)}=T7(j);
- c) ĉ=(000); and
- d) {circumflex over (d)}=(c0−1)mod2.
-
Case 7. If s4=(1,1) and t7(j)≠0, then -
- a) â=T7(j);
- b) {circumflex over (b)}=(000);
- c) ĉ=(000); and
- d) {circumflex over (d)}=(c0−1)mod2.
- While the decoding has been described mathematically above, it is important to understand that the system 10 preserves the main portion of the data bits "as is", encoding only a part of the data bits for use as separator blocks. Thus, the main portion of the data bits can be permuted by the channel interleaver 20 arbitrarily, without violating the k-constraint. The channel interleaver 20 may also permute the separator blocks formed from the encoded part of the data bits without violating the k-constraint. Specifically, the channel interleaver 20 may permute the data bits and the separator blocks without altering the maximum number of consecutive zeros defined by the k-constraint. Thus, the system 10 may be utilized with various types of iterative detection schemes based on turbo codes, LDPC codes, and other similar coding schemes. Moreover, the separator blocks limit error propagation. -
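This invariance can be illustrated with a small experiment. The sketch below is illustrative only: the separator blocks are drawn at random rather than produced by the 10/11 encoder, and the names and the permutation policy are assumptions. It builds random 49-bit words with nonzero separators in the slots of FIG. 8, permutes the data bits among the data positions and the 3-bit separator blocks among the 3-bit slots, and confirms that the k=13 bound still holds, including across a codeword boundary.

```python
import random

DATA_POSITIONS = (list(range(0, 5)) + list(range(8, 17)) + list(range(20, 29))
                  + list(range(32, 42)) + list(range(44, 49)))
SEP3_SLOTS = [list(range(5, 8)), list(range(17, 20)), list(range(29, 32))]
SEP2_SLOT = [42, 43]   # the 2-bit separator is left where it is

def max_zero_run(bits):
    run = best = 0
    for bit in bits:
        run = run + 1 if bit == 0 else 0
        best = max(best, run)
    return best

random.seed(1)
for _ in range(1000):
    word = [random.randint(0, 1) for _ in range(49)]
    for slot in SEP3_SLOTS + [SEP2_SLOT]:          # force every separator to be nonzero
        word[random.choice(slot)] = 1
    permuted = word[:]
    data = [word[p] for p in DATA_POSITIONS]       # permute data bits arbitrarily
    random.shuffle(data)
    for p, bit in zip(DATA_POSITIONS, data):
        permuted[p] = bit
    blocks = [[word[i] for i in slot] for slot in SEP3_SLOTS]
    random.shuffle(blocks)                         # permute the 3-bit separator blocks
    for slot, block in zip(SEP3_SLOTS, blocks):
        for i, bit in zip(slot, block):
            permuted[i] = bit
    assert max_zero_run(permuted + permuted) <= 13
```

The experiment swaps separator blocks only among slots of equal length, which is the point: the zero-run analysis above relies solely on every separator slot holding a nonzero block of the same length, not on which block sits where.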
FIGS. 9-17 illustrate the BER and SFR characteristics of the 48/49 RLL code with k=13 constructed by stuffing four separators into the data stream. FIG. 9 shows a graph of the bit error rates (BERs) before and after the RLL decoder of the present invention and before error correction coding. Using a code rate of 48/49 with a k-constraint of 13, a GPR target of length 5, and an ND of 2.5 (and no jitter), the system 10 exhibited very small error propagation across the RLL decoder: the before-RLL and after-RLL curves lie almost on top of one another. -
FIG. 10 illustrates the BERs of the same RLL code before and after decoding with a conventional RLL decoder. As shown, bit errors are propagated by the conventional RLL decoder: the before-RLL and after-RLL curves do not overlap, and the after-RLL curve shows a higher bit error rate than the before-RLL curve. -
FIG. 11 compares the conventional RLL code with the RLL code of the present invention. Both the conventional and the new RLL code have a code rate of 48/49 and a k-constraint of 13. As shown, with no jitter, the new RLL code of the present invention has a better sector failure rate than the conventional RLL code, particularly at higher signal-to-noise ratios. For example, at a signal-to-noise ratio of 19, the new RLL code has a sector failure rate of 2×10⁻⁸, compared to 8.5×10⁻⁸ for the conventional RLL code. In data storage and retrieval systems, such error rate improvements are significant. -
FIGS. 12 and 13 illustrate the BERs of the new RLL code and the conventional RLL code, before and after the RLL decoder. Both codes tested had a code rate of 48/49, a k-constraint of 13, a GPR target of length 5, an ND of 2.5, and a 50%/50% mix of jitter and electronic noise. In FIG. 12, the BERs before and after RLL decoding are almost identical, indicating that there is very little error propagation during the decoding process. In FIG. 13, by contrast, there is a visible difference between the before-RLL and after-RLL curves: at every data point, the after-RLL curve is visibly worse. At a signal-to-noise ratio of 12, for example, the bit error rate is 2.5×10⁻² before the RLL decoder versus 3×10⁻² after it. -
FIG. 14 illustrates the sector failure rates of the conventional RLL code versus the new RLL code of the present invention. Here, the code rate was 48/49 and the k-constraint was 13. The data was written in a longitudinal channel with a GPR target of length 5, and the noise consisted of 50% jitter and 50% electronic noise. As shown, the sector failure rates of the two codes are approximately the same, and at high signal-to-noise ratios (such as 20) the RLL code of the present invention slightly outperforms the conventional RLL code. It is significant that the RLL code of the present invention performs as well as the conventional code with respect to the sector failure rate, since a noticeable degradation in the sector failure rate would be intolerable. -
FIGS. 15 and 16 illustrate the BERs of the new RLL code and the conventional RLL code, before and after decoding using the RLL decoder, in perpendicular magnetic recording. In both cases, the RLL code had a code rate of 48/49 and a k-constraint of 13. FIG. 15 once again demonstrates very small error propagation: the before and after curves of the invented code nearly coincide. By contrast, FIG. 16 shows a visible separation between the BER curves before and after RLL decoding. -
FIG. 17 illustrates the sector failure rates of the conventional RLL code and the RLL code of the present invention, with a code rate of 48/49 and a k-constraint of 13. Here, the sector failure rates are approximately equal, with the RLL code of the present invention slightly outperforming the conventional RLL code at signal-to-noise ratios between 18.5 and 19.5. - The code stuffing technique of the present invention, while described with respect to the 7/8 and 48/49 encoders, may be implemented using other code rates and with different encoders. Regardless of the encoder and the code rate, the system 10 divides the input codeword into parts and encodes one of the parts to form separator bits for stuffing between the remaining, unencoded parts of the input codeword to form an output codeword. This allows the use of interleavers/permuters for randomly reordering the bits without affecting the k-constraint and without allowing errors to propagate throughout the output codeword. - Although the present invention has been described with reference to preferred embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.
Claims (23)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/767,831 US6933865B1 (en) | 2004-01-29 | 2004-01-29 | Method and apparatus for coded symbol stuffing in recording systems |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/767,831 US6933865B1 (en) | 2004-01-29 | 2004-01-29 | Method and apparatus for coded symbol stuffing in recording systems |
Publications (2)
Publication Number | Publication Date |
---|---|
US20050168358A1 true US20050168358A1 (en) | 2005-08-04 |
US6933865B1 US6933865B1 (en) | 2005-08-23 |
Family
ID=34807754
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/767,831 Expired - Lifetime US6933865B1 (en) | 2004-01-29 | 2004-01-29 | Method and apparatus for coded symbol stuffing in recording systems |
Country Status (1)
Country | Link |
---|---|
US (1) | US6933865B1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006155704A (en) * | 2004-11-26 | 2006-06-15 | Hitachi Global Storage Technologies Netherlands Bv | Data recording and reproducing processing method and circuit, data recording and reproducing apparatus |
US7404133B2 (en) * | 2004-12-12 | 2008-07-22 | Hewlett-Packard Development Company, L.P. | Error detection and correction employing modulation symbols satisfying predetermined criteria |
US7523375B2 (en) * | 2005-09-21 | 2009-04-21 | Distribution Control Systems | Set of irregular LDPC codes with random structure and low encoding complexity |
JP2007087529A (en) * | 2005-09-22 | 2007-04-05 | Rohm Co Ltd | Signal decoding device, signal decoding method and storage system |
KR100987692B1 (en) * | 2006-05-20 | 2010-10-13 | 포항공과대학교 산학협력단 | Apparatus and method for transmitting/receiving signal in a communication system |
EP2758867A4 (en) * | 2011-10-27 | 2015-07-08 | Lsi Corp | Digital processor having instruction set with complex exponential non-linear function |
GB2506159A (en) | 2012-09-24 | 2014-03-26 | Ibm | 2 Stage RLL coding, standard coding with global/interleave constraints, then sliding window substitution with sequences having different constraints |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5028922A (en) * | 1989-10-30 | 1991-07-02 | Industrial Technology Research Institute | Multiplexed encoder and decoder with address mark generation/check and precompensation circuits |
US5233348A (en) * | 1992-03-26 | 1993-08-03 | General Instrument Corporation | Variable length code word decoder for use in digital communication systems |
US6081208A (en) * | 1994-12-28 | 2000-06-27 | Kabushiki Kaisha Toshiba | Image information encoding/decoding system |
US6072410A (en) * | 1997-10-07 | 2000-06-06 | Samsung Electronics Co., Ltd. | Coding/decoding method for high density data recording and reproduction |
US5969649A (en) * | 1998-02-17 | 1999-10-19 | International Business Machines Corporation | Run length limited encoding/decoding with robust resync |
US5999110A (en) * | 1998-02-17 | 1999-12-07 | International Business Machines Corporation | Defect tolerant binary synchronization mark |
US6344807B1 (en) * | 1999-09-24 | 2002-02-05 | International Business Machines Corporation | Packet-frame generator for creating an encoded packet frame and method thereof |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060026483A1 (en) * | 2004-08-02 | 2006-02-02 | Sony Corporation And Sony Electronics, Inc. | Error correction compensating ones or zeros string suppression |
US7242325B2 (en) * | 2004-08-02 | 2007-07-10 | Sony Corporation | Error correction compensating ones or zeros string suppression |
US7409622B1 (en) * | 2005-11-10 | 2008-08-05 | Storage Technology Corporation | System and method for reverse error correction coding |
US20130290703A1 (en) * | 2012-04-25 | 2013-10-31 | Cleversafe, Inc. | Encrypting data for storage in a dispersed storage network |
US9380032B2 (en) * | 2012-04-25 | 2016-06-28 | International Business Machines Corporation | Encrypting data for storage in a dispersed storage network |
US10042703B2 (en) | 2012-04-25 | 2018-08-07 | International Business Machines Corporation | Encrypting data for storage in a dispersed storage network |
US10621044B2 (en) | 2012-04-25 | 2020-04-14 | Pure Storage, Inc. | Mapping slice groupings in a dispersed storage network |
US10795766B2 (en) | 2012-04-25 | 2020-10-06 | Pure Storage, Inc. | Mapping slice groupings in a dispersed storage network |
US11669397B2 (en) | 2012-04-25 | 2023-06-06 | Pure Storage, Inc. | Partial task processing with data slice errors |
US20180150351A1 (en) * | 2016-11-28 | 2018-05-31 | Alibaba Group Holding Limited | Efficient and enhanced distributed storage clusters |
US10268538B2 (en) * | 2016-11-28 | 2019-04-23 | Alibaba Group Holding Limited | Efficient and enhanced distributed storage clusters |
Also Published As
Publication number | Publication date |
---|---|
US6933865B1 (en) | 2005-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7486208B2 (en) | High-rate RLL encoding | |
US7409622B1 (en) | System and method for reverse error correction coding | |
US6933865B1 (en) | Method and apparatus for coded symbol stuffing in recording systems | |
US7719444B2 (en) | Modulation coding | |
US20070011551A1 (en) | Signal, storage medium, method and device for encoding, method and device for decoding | |
CN102187395B (en) | Data processing device and method | |
JP2008544686A (en) | Method and apparatus for low density parity check coding | |
US7734993B2 (en) | Method and apparatus for encoding and precoding digital data within modulation code constraints | |
US20120166912A1 (en) | Interleaving parity bits into user bits to guarantee run-length constraint | |
KR101211244B1 (en) | Modulation coding and decoding | |
US6229458B1 (en) | Rate 32/34 (D=0, G=9/I=9) modulation code with parity for a recording channel | |
US8276038B2 (en) | Data storage systems | |
KR101120780B1 (en) | Reverse concatenation for product codes | |
US6204781B1 (en) | General rate N/(N+1) (0, G) code construction for data coding | |
US7395482B2 (en) | Data storage systems | |
CN100426407C (en) | Data storage systems | |
US7191386B2 (en) | Method and apparatus for additive trellis encoding | |
US9419653B1 (en) | Systems and methods for combining constrained codes and error correcting codes | |
US6788223B2 (en) | High rate coding for media noise | |
US7137056B2 (en) | Low error propagation rate 32/34 trellis code | |
US7741980B2 (en) | Providing running digital sum control in a precoded bit stream using precoder aware encoding | |
Abdel-Ghaffar et al. | Analysis of coding schemes for modulation and error control | |
Mittelholzer et al. | Reverse concatenation of product and modulation codes | |
Bin et al. | A new ECC/RLL coding scheme | |
Fan et al. | Constrained Coding for Hard Decoders |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SEAGATE TECHNOLOGY LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUZNETSOV, ALEXANDER VASILIEVICH;KURTAS, EROZAN;REEL/FRAME:014949/0103;SIGNING DATES FROM 20040121 TO 20040123 |
|
FEPP | Fee payment procedure |
Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
REFU | Refund |
Free format text: REFUND - PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: R1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Free format text: REFUND - SURCHARGE FOR LATE PAYMENT, LARGE ENTITY (ORIGINAL EVENT CODE: R1554); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT Free format text: SECURITY AGREEMENT;ASSIGNORS:MAXTOR CORPORATION;SEAGATE TECHNOLOGY LLC;SEAGATE TECHNOLOGY INTERNATIONAL;REEL/FRAME:022757/0017 Effective date: 20090507 Owner name: WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATE Free format text: SECURITY AGREEMENT;ASSIGNORS:MAXTOR CORPORATION;SEAGATE TECHNOLOGY LLC;SEAGATE TECHNOLOGY INTERNATIONAL;REEL/FRAME:022757/0017 Effective date: 20090507 |
|
AS | Assignment |
Owner name: MAXTOR CORPORATION, CALIFORNIA Free format text: RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:025662/0001 Effective date: 20110114 Owner name: SEAGATE TECHNOLOGY INTERNATIONAL, CALIFORNIA Free format text: RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:025662/0001 Effective date: 20110114 Owner name: SEAGATE TECHNOLOGY LLC, CALIFORNIA Free format text: RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:025662/0001 Effective date: 20110114 Owner name: SEAGATE TECHNOLOGY HDD HOLDINGS, CALIFORNIA Free format text: RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:025662/0001 Effective date: 20110114 |
|
AS | Assignment |
Owner name: THE BANK OF NOVA SCOTIA, AS ADMINISTRATIVE AGENT, Free format text: SECURITY AGREEMENT;ASSIGNOR:SEAGATE TECHNOLOGY LLC;REEL/FRAME:026010/0350 Effective date: 20110118 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
AS | Assignment |
Owner name: SEAGATE TECHNOLOGY LLC, CALIFORNIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT AND SECOND PRIORITY REPRESENTATIVE;REEL/FRAME:030833/0001 Effective date: 20130312 Owner name: SEAGATE TECHNOLOGY INTERNATIONAL, CAYMAN ISLANDS Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT AND SECOND PRIORITY REPRESENTATIVE;REEL/FRAME:030833/0001 Effective date: 20130312 Owner name: SEAGATE TECHNOLOGY US HOLDINGS, INC., CALIFORNIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT AND SECOND PRIORITY REPRESENTATIVE;REEL/FRAME:030833/0001 Effective date: 20130312 Owner name: EVAULT INC. (F/K/A I365 INC.), CALIFORNIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT AND SECOND PRIORITY REPRESENTATIVE;REEL/FRAME:030833/0001 Effective date: 20130312 |
|
FPAY | Fee payment |
Year of fee payment: 12 |