CN113168360B - Data driven ICAD graph generation - Google Patents


Info

Publication number: CN113168360B
Authority: CN (China)
Prior art keywords: bits, data, content, storage device
Legal status: Active
Application number: CN201980080329.4A
Other languages: Chinese (zh)
Other versions: CN113168360A
Inventors: D·D·亚伯拉罕, E·沙龙, O·芬兹伯, R·扎米尔, S·阿赫滕伯格
Assignee (current and original): Western Digital Technologies Inc
Priority claimed from U.S. application Ser. No. 16/452,466 (US10862512B2)
Application filed by Western Digital Technologies Inc
Publication of CN113168360A
Application granted
Publication of CN113168360B

Classifications

    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; coding theory basic assumptions; coding bounds; error probability evaluation methods; channel models; simulation or testing of codes
    • H03M13/3723 Decoding methods or techniques, not specific to a particular type of coding, using means or methods for the initialisation of the decoder
    • H03M13/1111 Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms (under H03M13/1102, codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes)
    • H03M13/3746 Decoding methods or techniques, not specific to a particular type of coding, with iterative decoding


Abstract

The present disclosure provides a storage device that may include a decoder configured to connect bits to a content node based on a content aware decoding process. The content aware decoding process may be dynamic, determining the connection structure of bits and content nodes based on patterns in the data. In some cases, the decoder may connect non-adjacent bits to a content node based on the content aware decoding process. In other cases, the decoder may connect a first number of bits to a first content node and a second number of bits to a second content node, where the first number of bits and the second number of bits differ.

Description

Data driven ICAD graph generation
Cross Reference to Related Applications
This patent application claims priority from U.S. patent application Ser. No. 16/452,466, filed June 25, 2019 (which is a continuation-in-part of co-pending U.S. patent application Ser. No. 16/137,256, filed September 20, 2018), which is incorporated herein by reference in its entirety.
Background
Technical Field
Embodiments of the present disclosure relate generally to data decoding for computer applications. More particularly, aspects of the present disclosure relate to content aware decoding methods and systems.
Description of related Art
The reliability of Solid State Drives (SSDs) is a key factor in distinguishing these drives from other conventional memory arrangements. Such SSDs need to provide long-term endurance and data retention, especially at the end of drive life.
In order to achieve high reliability of the SSD, data stored on the SSD is protected so that it can be recovered in the event of a failure. Recovery systems may vary, but most typically rely on the protection of an error correction code (ECC). Most commonly, the ECC is a low density parity check (LDPC) code used with a soft decoder.
Soft ECC decoders have several features that allow them to improve their performance. A soft ECC decoder is programmed to read the data, and by knowing the underlying statistics of the encoded data, it can make a more accurate recovery prediction. Where the underlying statistics are not known, the ECC may use default parameters, which correspond to the case where the data is uniformly distributed.
However, these conventional decoder methods have a number of drawbacks. A large number of bit flips can lead to false estimates of the underlying statistics, and adjusting decoder parameters based on erroneously estimated statistics may have negative effects: decoding delay may increase, and power consumption may also rise as a result of inaccurate decoding.
In more extreme cases, decoding may degrade to the point that decoding may fail due to the number of errors present within the data. Since reliability is an important factor, manufacturers seek to provide methods and apparatus that do not suffer from these significant drawbacks.
It is desirable to provide a cost effective method and apparatus that can use existing data and underlying statistics to decode the data to prevent data loss.
It is further desirable to provide such methods and apparatus that may be used in conjunction with SSD technology.
There is still a further need to provide methods and apparatus for recovering data more correctly than conventional techniques and apparatus.
Disclosure of Invention
The present disclosure relates generally to content aware decoding methods and arrangements. Underlying statistics of the data to be recovered are obtained, and the decoder can exploit the data in several ways, achieving an increase in the decoder's correction capability together with reduced decoding delay and improved power consumption.
In one embodiment, a method is disclosed that includes: obtaining data from a memory; estimating a probability of a data value of the obtained data based on at least one of a source log-likelihood ratio and a channel log-likelihood ratio, wherein each bit in the obtained data has an associated log-likelihood ratio; determining at least one data pattern parameter of the data based on estimating a probability of the data value, wherein the at least one data pattern parameter comprises information about each bit in the obtained data; and performing a decoding process using the at least one data pattern parameter to determine a decoded data set.
In another embodiment, a method is disclosed that includes: obtaining data from the memory, applying the data to one or more symbol nodes of the network; estimating a probability of a data value of the obtained data based on at least one of a source log-likelihood ratio, a channel log-likelihood ratio, and a list of symbol probability distributions, wherein each bit of the obtained data has an associated log-likelihood ratio and is associated with a value in each bit position, and wherein the list of symbol probability distributions is based on an occurrence probability of each symbol; determining at least one data pattern parameter for the data based on the estimate, wherein the at least one data pattern parameter includes information about each bit in the obtained data; and performing a decoding process using the at least one data pattern parameter to determine a decoded data set.
In further exemplary embodiments, a method may be performed that includes obtaining data from a memory, the data requiring decoding; comparing the data from the memory to a patch list and enhancing the data from the memory to produce enhanced data when the comparing the data from the memory to the patch list indicates a match of the data to a particular patch; and decoding the enhanced data.
In another exemplary embodiment, a method is disclosed that includes: obtaining data of a memory block, wherein the data is a sample of bits of a data block of a given length, each of the bits having a bit value; comparing the sample with a predetermined data pattern; estimating at least one bit value of the data based on the comparison; and adjusting the bit value based on the estimate.
In another exemplary embodiment, a device is disclosed that includes at least one memory device and a controller coupled to the at least one memory device, the controller configured to provide a content aware decoding process configured to reduce a range of the decoding process, wherein the content aware decoding process samples a data block in one or more memory devices to estimate bit values in the data block based on log likelihood ratios, and wherein the log likelihood ratios are associated with a number of ones and zeros in each bit position.
In another non-limiting embodiment, an arrangement is disclosed that includes means for retrieving data from a memory; means for estimating a probability of a data value of the obtained data based on at least one of a source log-likelihood ratio and a channel log-likelihood ratio, wherein each bit in the obtained data has an associated log-likelihood ratio; means for determining at least one data pattern parameter of the data based on an estimate of a probability of the data value, wherein the at least one data pattern parameter comprises information about each bit in the obtained data; and means for performing a decoding process using the at least one data pattern parameter to determine a decoded data set.
In another exemplary embodiment, a storage device is disclosed that includes an interface, at least one memory device, a controller coupled to the at least one memory device, and a decoder configured to connect non-adjacent bits to a content node based on a content aware decoding process.
In another non-limiting embodiment, a storage device is disclosed that includes an interface, at least one memory device, a controller coupled to the at least one memory device, and a decoder configured to connect a first number of bits to a first content node and a second number of bits to a second content node based on a content aware decoding process, wherein the first number of bits and the second number of bits are different.
In further exemplary embodiments, a storage device is disclosed that includes means for determining a pattern in data and means for connecting bits to a set of content nodes based on the pattern in data, wherein the bits are non-adjacent, unequal in number, or a combination thereof.
Drawings
So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.
Fig. 1 is an arrangement of a host device and a storage device with accompanying interfaces and decoders.
Fig. 2 is a diagram of data bytes of a text file in ASCII format.
Fig. 3 is a diagram of an iterative content aware decoder.
Fig. 3A is a schematic diagram of the messages transmitted from a single symbol node to its neighboring bit nodes.
Fig. 4 is a flow chart of a method of utilizing a data pattern by a multi-patch technique.
Fig. 5 is a flow chart of a method of utilizing data patterns by a data interpolation technique.
Fig. 6 is a graph of the enhanced correction capability achieved via use of aspects of the present disclosure.
Fig. 7 is a graph of the reduced decoding delay achieved via use of aspects of the present disclosure.
Fig. 8 is a diagram of a dynamic iterative content aware decoder configured to connect non-adjacent bits to a content node.
Fig. 9 is a graph of an autocorrelation function.
Fig. 10 is a diagram of a dynamic iterative content aware decoder configured to connect different numbers of bits to two content nodes.
FIG. 11 is a diagram of a dynamic iterative content aware decoder configured to connect bits to three content nodes, where two of the three content nodes are connected to the same number of bits.
Fig. 12 is a diagram of a dynamic iterative content aware decoder configured to connect different numbers of bits to three content nodes.
Fig. 13 is a graph of the improved correction capability achieved via use of a dynamic iterative content aware decoder.
Fig. 14 is a graph of the reduced decoding delay achieved via use of a dynamic iterative content aware decoder.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation.
Detailed Description
Hereinafter, reference is made to embodiments of the present disclosure. However, it should be understood that the present disclosure is not limited to the specifically described embodiments. Rather, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the present disclosure. Furthermore, although embodiments of the present disclosure may achieve advantages over other possible solutions and/or over the prior art, whether a particular advantage is achieved by a given embodiment is not a limitation of the present disclosure. Thus, the following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim. Likewise, references to "the present disclosure" should not be construed as a generalization of any inventive subject matter disclosed herein and should not be considered an element or limitation of the appended claims except where explicitly recited in a claim.
Referring to FIG. 1, an arrangement 100 of a host device 106 and a storage device 102 is shown. Data may be sent and received through an interface 104 between the host device 106 and the storage device 102. The interface 104 is configured to interface with the host device 106, i.e., to accept data and/or command requests provided by the host device 106 for processing by the storage device 102. A decoder 120 resides in the storage device 102 for decoding data as needed. A memory controller 108 is provided in the storage device 102 to control a single memory device 110A or multiple memory devices 110N. In the illustrated embodiment, the storage device 102 may be a single SSD or multiple SSDs for storing information.
Aspects of the present disclosure use the elements/components described in fig. 1 to execute several methods that utilize the data in a decoder, enabling an increase in correction capability and a decrease in decoding delay and power consumption. These methods read data with higher accuracy than conventional methods. Aspects of the present disclosure may use structure present in the data itself to achieve more consistent decoding.
In one aspect, the data is scanned using a sliding window method, wherein the estimation of "0" and "1" data values is performed within a window. This information can then be used to adjust the decoder more accurately.
In many cases, data written to a flash memory (such as a NAND-based SSD) has structure to it. As described above, the memory devices 110A-110N may be based on flash memory (NAND) technology. The structure may be an attribute of the user source (e.g., if the saved data originated as text). In other embodiments, the structure may be an attribute of the NAND usage (e.g., a table written by firmware, data padded with zeros, etc.). In either case, finding such structures and considering them in the decoding process may result in significantly higher correction capability than attempting to perform error correction without them.
Aspects of the present disclosure describe a set of content aware methods for decoding data read from NAND. In the described method, the data is used to determine structures or patterns within the data, which ultimately provides better decoding capabilities.
Integrating content information into an error correction code decoder
To ensure high reliability of a memory device, such as an SSD, data written to the memory device is protected with an ECC code. For example, a typical ECC code may be a low density parity check code with a soft decoder.
In a soft decoder 120 such as that disclosed in fig. 1, a metric may be used to describe the probability of data being read as a "1" value or a "0" value. The metric is defined as a Log Likelihood Ratio (LLR) metric.
Considering that the read data is a vector y, the LLR value of bit i is calculated according to the following formula:

$$\mathrm{LLR}(b_i) = \log\frac{\Pr\{b_i = 0 \mid y\}}{\Pr\{b_i = 1 \mid y\}} = \underbrace{\log\frac{\Pr\{y \mid b_i = 0\}}{\Pr\{y \mid b_i = 1\}}}_{\mathrm{LLR\_Channel}} + \underbrace{\log\frac{\Pr\{b_i = 0\}}{\Pr\{b_i = 1\}}}_{\mathrm{LLR\_Source}}$$
in the definition provided above, the LLR_Channel portion of the LLR metrics is based on statistical information. The llr_channel portion may be predefined by several methods. The statistical model may be used in the predefining. In other embodiments, the predefining may be based on experimentation. In still other embodiments, the predetennination may be calculated online based on a channel estimation method.
The LLR_Source portion of the LLR metric provided above may be based on statistical information about the source. In the absence of prior knowledge, it can be assumed that the source data is uniformly distributed (Pr{bit_i = 0} = Pr{bit_i = 1} = 0.5), which means that the LLR_Source portion is equal to zero. In other embodiments, the source data may be distributed in a non-uniform manner. Since scrambling stored data is a common operation in storage systems, the uniform-distribution assumption is usually valid; nevertheless, LLR_Source can be calculated because the encoded data can be descrambled, thereby exposing the structure and statistics of the original data.
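To make the decomposition concrete, the short sketch below combines a channel LLR with a source LLR derived from an assumed source bias; the 90/10 bias and the channel value are invented example numbers, not values from the disclosure.

```python
import math

def source_llr(p_zero: float) -> float:
    """LLR_Source = log(Pr{bit = 0} / Pr{bit = 1}) for an assumed source bias."""
    return math.log(p_zero / (1.0 - p_zero))

# Hypothetical example: a bit position that is "0" ninety percent of the time.
llr_channel = -0.4            # weak channel evidence toward "1" (example value)
llr_source = source_llr(0.9)  # strong source prior toward "0"

llr_total = llr_channel + llr_source
decision = 0 if llr_total > 0 else 1
print(f"LLR_total = {llr_total:.3f} -> decoded bit = {decision}")
```

Here the source prior overturns the weak channel evidence, which is exactly the gain a content aware decoder extracts from knowing the data's structure.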
Knowledge of the underlying statistics of the data may help the decoder obtain correct results when decoding the data, as compared to conventional methods that do not use such methods. Exemplary embodiments of the present disclosure will be described such that understanding the underlying statistics facilitates the decoder 120 to produce the correct decoded data. Referring to fig. 2, a text file 200 in ASCII format is shown. Each character in ASCII is a byte 202 and all the most common characters occupy only the seven (7) least significant bits of the byte. If the ECC decoder understands, for example, that the data is shaped (structured) as shown in fig. 2, the ECC decoder can change the decoding parameters to reflect the data shape (change the probability of 1/0 distribution in the most significant bits of each byte) to improve performance and correction capability. A sliding window 204 is provided for analyzing the significance in the vertical direction.
Obtaining data patterns
As described above, when decoding data, it is helpful to know that the data has underlying statistics and data patterns. One major problem with this technique is that the data may incorporate noise and the data pattern or statistics may be corrupted during decoding. Two methods may be used to mitigate the effects of noise and/or data corruption.
Method 1 for reducing noise and/or data corruption
A first approach to mitigate the effects of noise and/or data corruption is to learn and store data statistics during the encoding process, where a clean version of all the data passes through the encoder. Statistics of the data may be collected during the encoding process and saved for future use. When decoding, the statistical information may be retrieved, in whatever manner it was stored, and fed to the decoder together with the data read from the NAND.
Since an SSD has a large capacity, such saving/storage of statistics can be performed during the encoding process without adverse effects. A more space-sensitive device may hold such statistics for groups of pages or even whole segments; e.g., an entire mapping table converting physical to logical locations has a very specific structure, so a few bytes of statistics may describe gigabytes of data, thereby significantly reducing the latency of address translation.
Method 2 for reducing noise and/or data corruption
Another approach is to use iterative estimation of the data. Iterations between data estimation and decoding may be performed, where each decoding iteration corrects more bits and helps to improve the data estimation, which in turn improves the decoding result. Thus, there is a positive feedback process: because the decoder "clears" more errors, the data source statistics can be estimated more accurately, which in turn allows the decoder to detect more errors.
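A minimal sketch of this feedback loop follows; `decode` and `estimate_statistics` are hypothetical callables standing in for the ECC decoder and the statistics estimator, and `max_iters` is an invented bound.

```python
def iterative_content_aware_decode(raw_bits, decode, estimate_statistics, max_iters=10):
    """Alternate between source-statistics estimation and ECC decoding.

    Each decoding pass corrects more bits, which sharpens the statistics
    estimate, which in turn improves the next decoding pass.
    """
    current = raw_bits
    stats = estimate_statistics(current)
    for _ in range(max_iters):
        decoded, success = decode(current, stats)
        if success:
            return decoded
        current = decoded                         # partially corrected data
        stats = estimate_statistics(current)      # re-estimate on cleaner data
    return current
```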
Aspects of the present disclosure show how several forms of statistics and labels representing data can be obtained in a manner that is less sensitive to noise from the NAND.
Further, aspects of the present disclosure describe several methods of using the data to increase correction capability and to reduce decoding delay and power consumption, both generally and as compared to conventional techniques.
Example method of Using data patterns
Four exemplary embodiments are disclosed herein, each describing a different method of using data patterns and statistics during a decoding operation. Each method is unique and will be superior to conventional methods given some type of structure, as discussed in each embodiment.
LLR tuner
As shown in fig. 2, for a text file the most significant bit in each byte is always zero. Thus, gains in decoding accuracy can sometimes be obtained by dividing the data to be evaluated into parts smaller than the whole, and gathering bit statistics according to bit position within those smaller data parts or blocks.
In one embodiment, the data is divided into data blocks of size K, and then, for each index within a K-tuple (0..K-1), the probability that the bit in that location is a "0" and the probability that the bit is a "1" are calculated. This calculation is done by running over all K-tuples and counting the number of zeros/ones at index j within the K-tuple.
Let the length of the data be N and b_i be the i-th bit of the data. Then the source LLR for the j-th bit within a K-tuple is calculated according to the following formula:

$$\mathrm{LLR}_j = \log\frac{\left|\{\, i : i \bmod K = j,\ b_i = 0,\ 0 \le i < N \,\}\right|}{\left|\{\, i : i \bmod K = j,\ b_i = 1,\ 0 \le i < N \,\}\right|}, \qquad j = 0, \ldots, K-1$$

For the example presented in fig. 2, K is equal to 8 (dividing the data into bytes), and LLR_7 is a high positive value, indicating that the eighth bit in each byte has a high probability of being zero.
The learning of the statistical information may be done globally for the entire data volume. Additional methods may divide the data into different regions, and for each region the statistics are learned locally.
Another embodiment is to use a sliding window approach, wherein the source data statistics used to calculate the source LLRs for the next data block are based on the current window. In yet another embodiment, a weighted window may be used, giving more weight to recently processed data and less weight to older data, so that older data is gradually forgotten and recent data carries more importance during decoding. One way to realize such weighting is shown in the sketch below.
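The sketch uses an exponential forgetting factor; the `decay` parameter and the smoothing constant are invented choices, not values from the disclosure.

```python
import math

class WeightedBitCounter:
    """Exponentially weighted 0/1 counts for a single bit position."""

    def __init__(self, decay=0.99):
        self.decay = decay    # forgetting factor: older observations fade out
        self.zeros = 0.0
        self.ones = 0.0

    def update(self, bit):
        # Decay the previous counts, then add the new observation.
        self.zeros *= self.decay
        self.ones *= self.decay
        if bit == 0:
            self.zeros += 1.0
        else:
            self.ones += 1.0

    def llr(self, eps=1.0):
        # Smoothed source LLR for this position, dominated by recent data.
        return math.log((self.zeros + eps) / (self.ones + eps))
```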
Symbol tuner
In this exemplary embodiment, the bits in the data are known to be correlated. For example, in a text file, bits are organized in bytes, where each byte represents a character. In a text file, the most common characters are alphanumeric characters, spaces, and punctuation, with the remaining characters being less common. This indicates that bits from the same byte have statistical correlation, and that knowing some of the bits within a byte increases the reliability of the other bits within the same byte.
The data may be divided into groups such that all bits in a group have statistical correlation. Each group is considered a symbol.
An LDPC decoder named ICAD ("iterative content aware decoder") is provided in the described arrangement; this LDPC decoder exploits the connections between bits from the same symbol. The new decoder is described via the short-code example in fig. 3. Check nodes 308 are provided to check the values in the bit nodes and to maintain the parity check constraints on the codeword bits; based on these constraints, the bit nodes improve their overall reliability.
The difference between the ICAD 300 and the original LDPC decoder is that the ICAD has additional nodes called "symbol nodes" or "content nodes". In the example of fig. 3, the symbols are groups of 8 bits. During the decoding process performed by the ICAD 300, each symbol node 302, 304 sends to each of its bit nodes P_v1 through P_v16 a message that represents the probability of the bit being a "0" or "1", based on information from the other bits of the same symbol and on statistical information reflecting the probability of each symbol. Although fig. 3 depicts a connection structure in which each symbol node 302 and 304 sends messages to a set of adjacent bit nodes, the connections to the bit nodes are not limited to this particular embodiment, as further described in figs. 8-14.
As described above, there are several techniques to obtain the probability of each symbol occurring. One of these techniques is provided in fig. 3A. In this exemplary embodiment, the message from the symbol node s to the bit node v_i may be as follows:

$$m_{s \to v_i} = \log\frac{\displaystyle\sum_{b \in B_i^0} \Pr\{b\} \prod_{j \in \{1 \ldots k\} \setminus i} e^{-b_j L_j}}{\displaystyle\sum_{b \in B_i^1} \Pr\{b\} \prod_{j \in \{1 \ldots k\} \setminus i} e^{-b_j L_j}}$$

wherein:
$B_i^t$ is the set of all symbols of length k such that the i-th bit is equal to t (t = 0 or 1);
$b$ in the formula refers to a specific symbol value;
$b_j$ is the value of the j-th bit in symbol b;
$\{1 \ldots k\} \setminus i$ refers to the set of all indexes between 1 and k excluding index i;
$L_j$ is the LLR of bit node $v_j$.
The $\Pr\{b\}$ terms in the formula are obtained from statistical information reflecting the probability of each symbol.
For the decoding process, the messages exchanged with the bit nodes may be represented in other forms. As a non-limiting example:
Message from bit node u to symbol node s:
$$Q_{us} = P_u + \sum_{c \in N_L(u)} R_{cu}$$
Message from bit node v to check node c:
$$Q_{vc} = P_v + m_{s \to v} + \sum_{c' \in N_L(v) \setminus c} R_{c'v}$$
Message from check node c to bit node v:
$$R_{cv} = \varphi^{-1}\!\left(\sum_{v' \in N_R(c) \setminus v} \varphi\left(Q_{v'c}\right)\right), \qquad \varphi(x) = \left(\operatorname{sign}(x),\, -\log\tanh\frac{|x|}{2}\right)$$
wherein:
$N_L(i)$ and $N_R(i)$ are the left and right neighborhoods of node i in the graph;
$P_v$ is the channel LLR of bit node v;
the $\varphi$ and $\varphi^{-1}$ operations are completed in the group $\{0,1\} \times \mathbb{R}^{+}$.
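Evaluated directly, the symbol-to-bit message above can be sketched as follows. This is a brute-force illustration that assumes symbols short enough to enumerate; the 3-bit distribution at the end is an invented example, not data from the disclosure.

```python
import math
from itertools import product

def symbol_to_bit_message(i, llrs, symbol_prob):
    """m_{s->v_i}: LLR contribution for bit i from its symbol's statistics.

    llrs[j] is L_j = log(Pr{b_j = 0} / Pr{b_j = 1}); symbol_prob maps each
    k-bit tuple b to Pr{b}. Pr{b_j} given L_j is proportional to
    exp(-b_j * L_j); the constant normalization factors cancel in the ratio.
    """
    k = len(llrs)
    num = den = 0.0
    for b in product((0, 1), repeat=k):           # all symbols of length k
        weight = symbol_prob.get(b, 0.0) * math.exp(
            -sum(b[j] * llrs[j] for j in range(k) if j != i))
        if b[i] == 0:
            num += weight                          # b in B_i^0
        else:
            den += weight                          # b in B_i^1
    return math.log(num / den)

# Hypothetical 3-bit source where the symbols 000 and 111 dominate.
probs = {(0, 0, 0): 0.45, (1, 1, 1): 0.45,
         (0, 1, 0): 0.05, (1, 0, 1): 0.05}
print(symbol_to_bit_message(0, [0.2, 1.0, 0.8], probs))
```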
These probabilities can be learned during the encoding process, where the data contains no errors. By scanning the data and counting the occurrences of each symbol, the probabilities are calculated and stored as side information for future reads of the data.
These probabilities can also be learned from the read data itself, when the number of errors is low enough to obtain them with high accuracy.
Finally, these probabilities can be estimated during the decoding process. Each bit maintains its probability of being a "0" or a "1" at each stage of decoding. From these per-bit probabilities, the probability of occurrence of each symbol can be obtained.
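Assuming the bits of a symbol are treated as independent given the decoder's current per-bit probabilities (an assumption of this sketch, not a requirement of the method), the per-symbol distribution can be assembled as follows; the function and parameter names are illustrative.

```python
from itertools import product

def symbol_probabilities(bit_p0, k=4):
    """Estimate Pr{b} for every k-bit symbol from per-bit Pr{bit = 0} values.

    bit_p0 is a flat list of the decoder's current Pr{bit = 0} estimates;
    consecutive groups of k bits form the symbols. Enumeration costs 2**k,
    so k must stay small.
    """
    probs = {}
    for start in range(0, len(bit_p0) - k + 1, k):
        window = bit_p0[start:start + k]
        for b in product((0, 1), repeat=k):
            p = 1.0
            for j, bit in enumerate(b):
                p *= window[j] if bit == 0 else 1.0 - window[j]
            probs[b] = probs.get(b, 0.0) + p
    total = sum(probs.values())
    return {b: p / total for b, p in probs.items()}
```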
Multi-patch technique
This embodiment describes a non-local approach that does not employ any local statistics. Instead, similarity of patches in the data is assumed; if such similarity exists, the method calculates a priori probabilities of bits relative to similar patches, or flips bits in a coarser way.
As shown in fig. 4, the process is iterative and switches between estimating bits and decoding according to the new dictionary.
The method begins by creating a dictionary of all words of a particular length that appear a sufficient number of times in the data source. Each bit is then estimated using a sliding window method, where the bit's neighborhood is compared to the dictionary and a bit value/correction term is calculated from similar patches.
Several metrics may be used to estimate the bit value, including a K-nearest-neighbors (KNN) method, where each bit obtains a correction term based on the probability of 1/0 in the middle element of the K nearest patches. Each patch is weighted by its probability of occurrence, and K can vary from 1 to the size of the dictionary.
After all bits are estimated, a decoding process is performed. If decoding fails, the process may be repeated using the data estimates given by the previous iterations to create a new dictionary and begin a new estimate. The process may also be repeated for various window sizes.
Referring to fig. 4, a method 400 is provided for using multiple patches to exploit data patterns. Method 400 begins at 402, where algorithm parameters are set, including the dictionary word length K and the maximum number of iterations per length, Max_ITER. The method proceeds from 402 to 404, where a dictionary of patches of length K that frequently occur in the data source is constructed. A threshold may be used to establish the minimum number of occurrences required. The method then proceeds to 406, where a query is made as to whether the dictionary is empty. If the dictionary is empty, a decoding failure is declared at 408, and a flag or notification may be raised to the system and/or user. If the dictionary is not empty, the data in memory is evaluated at 410, where distance and probability metrics over the patches may be used to change or flip the corresponding bits. Another query is performed at 412 to identify whether the decoding was successful. If the decoding is successful, the method ends at 414. If the decoding is unsuccessful, the method proceeds to 416, where the data source is updated based on the decoded output and any previous iterations. The method then proceeds to 418, where a check is performed against the maximum number of iterations. If the maximum number of iterations has been reached at 418, the method proceeds to 420, where the value K is increased and the method returns to 404. If Max_ITER has not been reached at 418, the method returns to 404 without increasing K.
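A compact sketch of the dictionary-building and patch-voting steps of method 400 follows; the minimum-occurrence threshold, the Hamming distance metric, and the majority vote over middle bits are illustrative choices rather than requirements of the disclosure.

```python
from collections import Counter

def build_dictionary(bits, K, min_count=4):
    """Collect every length-K window that appears at least min_count times."""
    counts = Counter(tuple(bits[i:i + K]) for i in range(len(bits) - K + 1))
    return {patch: c for patch, c in counts.items() if c >= min_count}

def estimate_middle_bits(bits, dictionary, K, knn=3):
    """Slide a window over the data; for each window, vote the middle bit
    from the knn closest dictionary patches (Hamming distance), weighting
    each patch by its frequency."""
    estimates = list(bits)
    mid = K // 2
    for i in range(len(bits) - K + 1):
        window = tuple(bits[i:i + K])
        neighbors = sorted(
            dictionary.items(),
            key=lambda item: sum(a != b for a, b in zip(item[0], window)))[:knn]
        ones = sum(c for patch, c in neighbors if patch[mid] == 1)
        total = sum(c for _, c in neighbors)
        if total:
            estimates[i + mid] = 1 if 2 * ones > total else 0
    return estimates
```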
Data interpolation technique
In some cases, the data may have local patterns, where different patterns hold over stretches of several bytes but average out or contribute little when viewed in the overall statistics. Examples include "0" or "1" padding at the end of the data (either present in the original data or added by firmware), or file-system data whose addresses increase slowly and may resemble a linear function.
By looking at the neighborhood of each bit, the value of the data can be estimated. Using a sliding window approach, the neighborhood values around each intermediate element of the window may be fitted to a predefined function. In another embodiment, the neighborhood values may be fitted to a set of predefined functions. If the neighbors fit well, a correction term/bit value for the intermediate element may be calculated. Processing all bits of the data creates a complete data estimate. The process may be performed on the data once. In other embodiments, the process may be repeated several times and over several neighborhood sizes.
In any step of the estimation process, a decoding process may be attempted, in which case the decoder output may be used to refine the estimation of the data, iterating between decoding and estimation until a successful decoding or timeout.
Referring to fig. 5, a method 500 of using data interpolation is provided for exploiting data patterns. The method begins at 502, where algorithm parameters, such as the dictionary word length K and the maximum number of iterations per length, Max_ITER, are set. The method then proceeds to 504, where a correction term may be established for each bit by using the local patterns in the data and comparing each piece of data to the local pattern. The method then proceeds to 506, where a query is run to determine whether decoding was successful. If the decoding was successful, the method ends at 508. If the decoding was not successful, the method proceeds to 510, where the data source is updated based on the previous iteration and the decoded output. The method then proceeds to 512, where a check is performed against the maximum number of iterations. If the maximum number of iterations has been reached at 512, the method proceeds to 514, where the value K is increased and the method returns to 504. If Max_ITER has not been reached at 512, the method returns to 504 without increasing K.
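The interpolation step can be sketched with a linear fit over each window's neighborhood, as below; treating the data as integer samples, the window size, and the residual threshold are illustrative assumptions.

```python
import numpy as np

def interpolate_corrections(samples, window=9, tol=2.0):
    """For each window, fit a line to the neighbors of the middle element;
    when the neighbors fit tightly, propose the fitted value as a
    correction term for the middle element."""
    corrections = {}
    half = window // 2
    x = np.arange(window, dtype=float)
    for i in range(half, len(samples) - half):
        y = np.asarray(samples[i - half:i + half + 1], dtype=float)
        mask = np.ones(window, dtype=bool)
        mask[half] = False                   # exclude the middle element
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        residual = np.abs(y[mask] - (slope * x[mask] + intercept)).mean()
        if residual < tol:                   # the neighborhood looks linear
            corrections[i] = slope * half + intercept
    return corrections

# Example: slowly increasing file-system addresses with one corrupted sample.
data = list(range(100, 120))
data[10] = 255                               # a bit-flipped sample
print(interpolate_corrections(data)[10])     # proposes 110.0
```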
Simulation results
The following figures illustrate the improvements obtained by use of the present invention. Fig. 6 illustrates the improvement in correction capability, where graph 600 plots success rate versus the number of bit flips in a data block. The X-axis 602 corresponds to the number of bit flips in a data block, and the Y-axis 604 corresponds to the decoding success rate over 128 sequentially read data blocks. The "lower" curve 606 corresponds to the default decoder, which has no knowledge of the underlying statistics. The "upper" curve 608 exploits the underlying structure and corresponds to a content-aware decoding method in which the underlying statistics are estimated iteratively.
In this example, the content aware method can handle twice the number of bit flips that a conventional decoder can handle. In the region where the default decoder cannot decode at all, the enhanced decoder still maintains nearly 100% decoding success.
Referring to fig. 7, there is also an improvement in decoding delay. Fig. 7 provides a graph 700 of decoding time versus the number of flipped bits in a block of data. The X-axis 702 corresponds to the number of bit flips in a data block and the Y-axis 704 corresponds to the average decoding delay from 128 data blocks read sequentially.
The "upper" curve 706 corresponds to the default decoder, with no underlying statistics involved. The "lower" curve 708 corresponds to the decoder using the underlying statistics of the data. As can be seen from fig. 7, even in the region where both the default decoder and the enhanced decoder decode with a 100% success rate, the enhanced decoder converges with lower delay.
Bit connection structure in memory device
In the above-described embodiments, the storage device may include an ICAD configured to connect a set of adjacent bits to a content node (or symbol node). However, the ICAD is not limited to a single implementation in which adjacent bits are connected to a content node. Instead, the ICAD may be dynamic, such that the connection structure produced by the dynamic ICAD connects non-adjacent bits, or unequal numbers of bits, to the content nodes. In some cases, with each iteration the dynamic ICAD may determine a new pattern in the data and rearrange the grouping of bits connected to each content node accordingly, such that each new iteration may improve the overall performance of the storage device.
FIG. 8 illustrates an example graph 800 of a dynamic ICAD in a storage device configured to connect non-adjacent bits to content nodes (or symbol nodes). As shown, the ICAD dynamically connects non-adjacent bit nodes 806 (shown as P_v1 through P_v16) to content nodes 802 and 804. Further, as in FIG. 3, check nodes 808 are provided.
In one embodiment, non-adjacent bits may be grouped by using a set of content node structures. For example, bits within the set of content node structures may be arranged such that non-adjacent bits connect to content nodes in the set. The connection structure between the non-adjacent bit nodes 806 and the content nodes 802 and 804 may be based on the dynamic ICAD determining patterns in the data and grouping the data accordingly.
There are various methods for grouping repeated data. For example, one such method is frequency calculation. In such instances, a frequency transform similar to a Fast Fourier Transform (FFT) may estimate the repetition in the data, and the data may be grouped by determining its frequency components. In some cases, the frequency components may match basis functions. Continuing with this example, once the dominant frequency is determined, the data may be transformed such that the grouping is based on a complete cycle of the data or on the location of the data within the cycle.
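A sketch of FFT-based period detection follows; the synthetic signal with a 64-sample period is an invented test case.

```python
import numpy as np

def dominant_period(values):
    """Estimate the repetition period from the strongest nonzero frequency bin."""
    x = np.asarray(values, dtype=float)
    x -= x.mean()                            # remove DC so bin 0 cannot win
    spectrum = np.abs(np.fft.rfft(x))
    peak = 1 + int(np.argmax(spectrum[1:]))  # skip the DC bin
    return round(len(x) / peak)

# Synthetic signal repeating every 64 samples.
n = 1024
data = 128 + 100 * np.sin(2 * np.pi * np.arange(n) / 64)
print(dominant_period(data))                 # -> 64
```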
Another method for grouping repeated data is based on an autocorrelation function, which measures the correlation of the data with a version of the data shifted by some factor. Implementing the autocorrelation function includes determining a maximum point and a highest correlation point. The maximum point of the autocorrelation function occurs at a shift of 0. The highest correlation point is the second peak of the autocorrelation function, reached after shifting the data by the factor. The offset (or difference) between the maximum point and the highest correlation point is the number of bytes by which each bit grouping may need to be separated in order to produce a good symbol for connecting to the bits.
An example graph resulting from an implementation of the autocorrelation function is shown in fig. 9. The example graph 900 in fig. 9 has an X-axis representing the shift 902 of the data and a Y-axis representing the unbiased correlation 904. The example graph 900 further illustrates that the data 906 has its maximum point 908 at a shift of 0, while the shifted data 910 has its highest correlation point 912. The difference between the maximum point 908 and the highest correlation point 912 is 64, as shown at 914, meaning that the bit groupings should be separated by 64 bytes in order to obtain good content nodes to connect with the bits in the storage device.
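The corresponding autocorrelation search can be sketched as follows; the tiled random pattern is an invented test case mirroring the 64-byte offset of fig. 9.

```python
import numpy as np

def repetition_offset(values):
    """Return the shift of the second autocorrelation peak, i.e., the spacing
    between repetitions (the offset between points 908 and 912 in fig. 9)."""
    x = np.asarray(values, dtype=float)
    x -= x.mean()
    full = np.correlate(x, x, mode="full")
    ac = full[len(x) - 1:]        # non-negative shifts; ac[0] is the maximum
    return 1 + int(np.argmax(ac[1:]))

# A 64-byte pattern tiled 16 times: the second peak lands at shift 64.
pattern = np.random.default_rng(1).integers(0, 256, size=64)
data = np.tile(pattern, 16)
print(repetition_offset(data))    # -> 64
```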
In another example, power spectral density analysis may determine the repeated data for grouping bits. In some cases, the power spectral density function may be described as a combination of the autocorrelation function and the frequency calculation, describing the power of a signal as a function of frequency. In such cases, the power spectral density function may identify repeated data groupings by determining the contributions of all peaks that occur at the same shift when the autocorrelation function is implemented.
Another method of determining data groupings is based on the natural frequency at which the data is stored. For example, the data may be stored in 8-bit, 16-bit, 32-bit, 64-bit, or 128-bit groups. A data grouping may be created based on the Most Significant Bit (MSB) (and reordered with each iteration of the dynamic ICAD), with the grouping repeated down to the Least Significant Bit (LSB).
While exemplary methods for determining patterns in data are described, additional methods for determining the patterns used to connect bits and content nodes are contemplated.
FIG. 10 illustrates an example graph 1000 of a dynamic ICAD in a storage device configured to connect different numbers of bits to content nodes. For example, as shown in the example graph 1000, a first number of bits 1006 (P_v1, P_v2, P_v4, and P_v8) is connected to content node 1002. Continuing with the example, a second number of bits 1006 (P_v3, P_v5, P_v6, P_v7, P_v9, P_v10, P_v11, P_v12, P_v13, P_v14, P_v15, and P_v16) is connected to content node 1004. Furthermore, as in fig. 3, check nodes 1008 are provided.
The connection structure between content nodes 1002 and 1004 and bits 1006 is based on patterns in the data determined by one or more of the methods described above. Further, the number of bits 1006 connected to content nodes 1002 and 1004 may vary in each iteration of the dynamic ICAD (not shown). For example, during a second iteration, the dynamic ICAD may determine that the following bits 1006 connect to content node 1002: P_v1, P_v2, P_v4, P_v8, P_v12, P_v13, and P_v14. Continuing with the example, a second number of bits 1006 connects to content node 1004: P_v3, P_v5, P_v6, P_v7, P_v9, P_v10, P_v11, P_v15, and P_v16.
FIG. 11 illustrates an example graph 1100 of a dynamic ICAD in a storage device configured to connect bits to three different content nodes. In the exemplary depiction, content node 1102 is connected to four bits 1108: P_v1, P_v4, P_v7, and P_v9. Content node 1104 is also connected to four bits 1108: P_v2, P_v6, P_v10, and P_v13. Content node 1106 is connected to eight bits 1108: P_v3, P_v5, P_v8, P_v11, P_v12, P_v14, P_v15, and P_v16. Furthermore, as in fig. 3, check nodes 1110 are provided.
FIG. 12 illustrates an example graph 1200 of a dynamic ICAD in a storage device configured to connect a different number of bits to each of three content nodes. The example graph 1200 depicts each of the content nodes 1202, 1204, and 1206 connected to a different number of bits 1208. In some cases, the connection structure may be based on one of the described methods for determining patterns in the data. Furthermore, as in fig. 3, check nodes 1210 are provided.
The described connection structures illustrate how the dynamic ICAD groups bits to create connection structures that improve the functionality of the storage device with each iteration of the dynamic ICAD. Several embodiments of connections between bits and nodes are depicted in figs. 8 and 10-12, but further embodiments are contemplated, and the connection structures shown are for illustrative purposes only.
Simulation results
The following figures illustrate examples of the improvements obtained by implementing a dynamic ICAD to generate connection structures (e.g., connections between content nodes and non-adjacent bits, and different numbers of bits connected to content nodes). Fig. 13 illustrates the improvement in correction capability, where graph 1300 (an update of graph 600 in fig. 6) plots success rate versus the number of bit flips in a data block. The X-axis 1302 corresponds to the number of bit flips in a data block, and the Y-axis 1304 corresponds to the decoding success rate. Fig. 13 shows the improvement in correction capability of the dynamic ICAD compared to the ICAD and a conventional decoder. For example, as shown in graph 1300, the dynamic ICAD curve 1310 is superior to the conventional decoder curve 1306 and the ICAD curve 1308.
Similarly, fig. 14 shows that decoding delay is improved by implementing a dynamic ICAD. Fig. 14 provides a graph 1400 (an update of graph 700 in fig. 7) of decoding time versus the number of flipped bits in a data block when a dynamic ICAD is implemented. The X-axis 1402 corresponds to the number of bit flips in a data block, and the Y-axis 1404 corresponds to the average decoding delay over 128 sequentially read data blocks. Fig. 14 shows the improvement in decoding delay of the dynamic ICAD compared to the ICAD and a conventional decoder. For example, as shown in graph 1400, the conventional decoder 1406 and the ICAD 1408 execute with a higher average decoding delay than the dynamic ICAD 1410. Thus, the dynamic ICAD converges much faster and may require less time to decode.
In one non-limiting embodiment, a method is disclosed that includes: obtaining data from a memory; estimating a probability of a data value of the obtained data based on at least one of a source log-likelihood ratio and a channel log-likelihood ratio, wherein each bit in the obtained data has an associated log-likelihood ratio; determining at least one data pattern parameter of the data based on estimating a probability of the data value, wherein the at least one data pattern parameter comprises information about each bit in the obtained data; and performing a decoding process using the at least one data pattern parameter to determine a decoded data set.
In another non-limiting embodiment, the method may be performed wherein the obtained data is obtained from a sliding window sample.
In yet another non-limiting embodiment, the method may be performed wherein the probability of the data value is estimated for a plurality of bits simultaneously.
In another non-limiting embodiment, the method may further comprise performing an iterative estimation process on the data prior to the decoding process.
In another embodiment, a method is disclosed that includes: obtaining data from the memory, applying the data to one or more symbol nodes of the network; estimating a probability of a data value of the obtained data based on at least one of a source log-likelihood ratio, a channel log-likelihood ratio, and a list of symbol probability distributions, wherein each bit of the obtained data has an associated log-likelihood ratio and is associated with a value in each bit position, and wherein the list of symbol probability distributions is based on an occurrence probability of each symbol; determining at least one data pattern parameter for the data based on the estimate, wherein the at least one data pattern parameter includes information about each bit in the obtained data; and performing a decoding process using the at least one data pattern parameter to determine a decoded data set.
In another exemplary embodiment, the method may be performed wherein the probability of estimating the data value is determined during the encoding process.
In further exemplary embodiments, the method may be performed wherein estimating the probability of the data value is performed by scanning the data and counting the occurrence of each symbol in the data.
In further exemplary embodiments, the method may be performed wherein the probability of the data value is estimated during a decoding process to determine the probability of occurrence of each symbol.
In further exemplary embodiments, a method may be performed that includes obtaining data from a memory, the data requiring decoding; comparing the data from the memory to a patch list and enhancing the data from the memory to produce enhanced data when comparing the data from the memory to the patch list indicates a match of the data to a particular patch; and decoding the enhanced data.
In further exemplary embodiments, the method may be performed wherein the data is a sample of bits of a data block.
In further exemplary embodiments, the method may be performed wherein the patch list includes a distance metric.
In another exemplary embodiment, the method may be performed wherein the patch list includes a probability metric.
In another non-limiting embodiment, the method may be performed wherein the patch list is predetermined.
In another non-limiting embodiment, the method may further include checking the decoding of the enhancement data.
In another non-limiting embodiment, the method may further comprise updating the data in the memory when the decoding of the enhancement data is unsuccessful.
In another non-limiting embodiment, the method may further comprise: checking the number of iterations performed when updating the data in the memory, and ending the method when the number of iterations performed is equal to the allowed maximum number of iterations, and performing another iteration of comparing the data with the patch list when the number of iterations performed is less than the maximum number of iterations.
In another exemplary embodiment, a method is disclosed that includes: obtaining data of a memory block, wherein the data is a sample of bits of a data block of a given length, each of the bits having a bit value; comparing the sample with a predetermined data pattern; estimating at least one bit value of the data based on the comparison; and adjusting the bit value based on the estimate.
In another exemplary embodiment, the method may be performed wherein the predetermined data pattern is a mathematical linear function.
In another exemplary embodiment, the method may further include expanding the sample obtained with each execution of the method.
In another exemplary embodiment, the method may be performed wherein adjusting the bit value includes generating and applying a correction term to the bit value.
In yet another embodiment, the method may be performed multiple times using samples of different data blocks of the memory block.
In another exemplary embodiment, a device is disclosed that includes at least one memory device and a controller coupled to the at least one memory device, the controller configured to provide a content aware decoding process configured to reduce a range of the decoding process, wherein the content aware decoding process samples a data block in one or more memory devices to estimate bit values in the data block based on log likelihood ratios, and wherein the log likelihood ratios are associated with a number of ones and zeros in each bit position.
In another non-limiting embodiment, the apparatus further comprises a controller, the controller being further configured to have a predetermined patch list.
In another non-limiting embodiment, the device may be configured wherein the content aware decoding process resides in firmware in the controller.
In another non-limiting embodiment, an arrangement is disclosed that includes means for retrieving data from a memory; means for estimating a probability of a data value of the obtained data based on at least one of a source log-likelihood ratio and a channel log-likelihood ratio, wherein each bit in the obtained data has an associated log-likelihood ratio; means for determining at least one data pattern parameter of the data based on an estimate of a probability of the data value, wherein the at least one data pattern parameter comprises information about each bit in the obtained data; and means for performing a decoding process using the at least one data pattern parameter to determine a decoded data set.
In another non-limiting embodiment, the arrangement may be configured wherein the means for obtaining data obtains data from one of a memory and a solid state drive.
In another exemplary embodiment, a storage device is disclosed that includes an interface, at least one memory device, a controller coupled to the at least one memory device, and a decoder configured to connect non-adjacent bits to a content node based on a content aware decoding process.
In another non-limiting embodiment, the storage device may be configured wherein the connection of non-adjacent bits is based on a determination of a pattern in the data.
In another non-limiting embodiment, the storage device may be configured wherein each iteration of the content-aware decoding process connects a different set of non-adjacent bits to the content node.
In another non-limiting embodiment, the storage device may be configured wherein connecting non-adjacent bits to the content node includes frequency calculation for grouping repeated data.
In another non-limiting embodiment, the storage device may be configured wherein the frequency calculation comprises a Fast Fourier Transform (FFT).
In another non-limiting embodiment, the storage device may be configured wherein connecting non-adjacent bits to the content node includes autocorrelation.
In another non-limiting embodiment, the storage device may be configured wherein connecting non-adjacent bits to the content node includes spectral analysis of autocorrelation.
In another non-limiting embodiment, the memory device may be configured wherein connecting non-adjacent bits includes reordering data based on a natural data frequency.
In another non-limiting embodiment, the storage device may be configured wherein connecting non-adjacent bits is based on arranging a set of bits within a set of content node structures.
In another embodiment, a storage device is disclosed that includes an interface, at least one memory device, a controller coupled to the at least one memory device, and a decoder configured to connect a first number of bits to a first content node and a second number of bits to a second content node based on a content aware decoding process, wherein the first number of bits and the second number of bits are different.
In another exemplary embodiment, the storage device may be configured wherein the first content node is connected to a different number of bits in each iteration of the content aware decoding process.
In another exemplary embodiment, the storage device may be configured wherein the second content node is connected to a different number of bits in each iteration of the content aware decoding process.
In another exemplary embodiment, the storage device may be configured wherein the number of bits connected to the first content node and the second content node is different for each iteration of the content-aware decoding process.
In another exemplary embodiment, the storage device may be configured wherein the decoder is configured to connect a third number of bits to the third content node, wherein the third number of bits is the same as the first number of bits or the second number of bits.
In another exemplary embodiment, the storage device may be configured wherein the decoder is configured to connect a third number of bits to the third content node, wherein the third number of bits is different from the first number of bits and the second number of bits.
In another exemplary embodiment, the storage device may be configured wherein the connection is based on at least one of frequency calculation, autocorrelation, spectral analysis of autocorrelation, natural data frequency, and an arrangement of bits within a set of content node structures.
In further exemplary embodiments, a storage device is disclosed that includes means for determining a pattern in data and means for connecting bits to a set of content nodes based on the pattern in data, wherein the bits are non-adjacent, unequal in number, or a combination thereof.
In another exemplary embodiment, the storage device may be configured wherein the means for determining a pattern in the data determines the pattern from the decoder.
In another exemplary embodiment, the storage device may be configured wherein the means for connecting bits to a set of content nodes connects the bits from the decoder.
In another exemplary embodiment, the storage device may be configured wherein the means for connecting bits to a set of content nodes comprises at least one of means for connecting non-adjacent bits to each content node and means for connecting a different number of bits to each content node.
While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (19)

1. A storage device, the storage device comprising:
an interface;
at least one memory device;
a controller coupled to the at least one memory device; and
a decoder configured to connect a first number of non-adjacent bits to a first content node based on a content aware decoding process and a second number of non-adjacent bits to a second content node based on the content aware decoding process, wherein the first number of non-adjacent bits and the second number of non-adjacent bits are different,
wherein each content node is configured to send a message from a respective symbol to each bit node during decoding, the message representing the probability of the bit being a "0" or "1" based on information from other bits of the same symbol and statistical information reflecting the probability of each symbol.
2. The storage device of claim 1, wherein the connection of the non-adjacent bits is based on a determination of a pattern in data.
3. The storage device of claim 1, wherein each iteration of the content aware decoding process connects a different set of non-adjacent bits to the content node.
4. The storage device of claim 1, wherein connecting non-adjacent bits to the content node comprises a frequency calculation of duplicate data packets.
5. The storage device of claim 4, wherein the frequency calculation comprises a Fast Fourier Transform (FFT).
6. The storage device of claim 1, wherein connecting non-adjacent bits comprises reordering data based on a natural data frequency.
7. The storage device of claim 1, wherein connecting non-adjacent bits is based on arranging a set of bits within a set of content node structures.
8. A storage device, the storage device comprising:
an interface;
at least one memory device;
a controller coupled to the at least one memory device; and
a decoder configured to connect non-adjacent bits to a content node based on a content aware decoding process, wherein connecting non-adjacent bits to the content node comprises autocorrelation,
wherein each content node is configured to send a message from a respective symbol to each bit node during decoding, the message representing the probability of the bit being a "0" or "1" based on information from other bits of the same symbol and statistical information reflecting the probability of each symbol.
9. The storage device of claim 8, wherein connecting non-adjacent bits to the content node comprises spectral analysis of the autocorrelation.
10. A storage device, the storage device comprising:
an interface;
at least one memory device;
a controller coupled to the at least one memory device; and
a decoder configured to, based on a content aware decoding process:
connect a first number of bits to a first content node, and
connect a second number of bits to a second content node,
wherein the first number of bits and the second number of bits are different,
wherein each content node is configured to send a message from a respective symbol to each bit node during decoding, the message representing the probability of the bit being a "0" or "1" based on information from other bits of the same symbol and statistical information reflecting the probability of each symbol.
11. The storage device of claim 10, wherein the first content node is connected to a different number of bits in each iteration of the content aware decoding process.
12. The storage device of claim 11, wherein the second content node is connected to a different number of bits in each iteration of the content aware decoding process.
13. The storage device of claim 12, wherein the number of bits connected to the first content node and the second content node is different for each iteration of the content aware decoding process.
14. The storage device of claim 10, wherein the decoder is configured to connect a third number of bits to a third content node, wherein the third number of bits is the same as the first number of bits or the second number of bits.
15. The storage device of claim 10, wherein the decoder is configured to connect a third number of bits to a third content node, wherein the third number of bits is different from the first number of bits and the second number of bits.
16. The storage device of claim 10, wherein the connection is based on at least one of:
a frequency calculation;
an autocorrelation;
a spectral analysis of the autocorrelation;
a natural data frequency; and
an arrangement of bits within a set of content node structures.
17. A storage device, the storage device comprising:
means for determining a pattern in data; and
means for connecting bits to a set of content nodes based on the pattern in the data, comprising at least one of:
means for connecting non-adjacent bits to each content node; and
means for connecting a different number of bits to each content node,
wherein the bits are any one of:
non-adjacent;
unequal in number; or
a combination thereof,
wherein each content node is configured to send a message from a respective symbol to each bit node during decoding, the message representing the probability of the bit being a "0" or "1" based on information from other bits of the same symbol and statistical information reflecting the probability of each symbol.
18. The storage device of claim 17, wherein the means for determining a pattern in data determines the pattern from a decoder.
19. The storage device of claim 17, wherein the means for connecting bits to a set of content nodes connects bits from a decoder.
CN201980080329.4A 2019-06-25 2019-12-18 Data driven ICAD graphics generation Active CN113168360B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16/452,466 US10862512B2 (en) 2018-09-20 2019-06-25 Data driven ICAD graph generation
US16/452,466 2019-06-25
PCT/US2019/067069 WO2020263325A1 (en) 2019-06-25 2019-12-18 Data driven icad graph generation

Publications (2)

Publication Number Publication Date
CN113168360A CN113168360A (en) 2021-07-23
CN113168360B true CN113168360B (en) 2023-09-19

Family

ID=74061022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980080329.4A Active CN113168360B (en) 2019-06-25 2019-12-18 Data driven ICAD graphics generation

Country Status (3)

Country Link
CN (1) CN113168360B (en)
DE (1) DE112019005507T5 (en)
WO (1) WO2020263325A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1406392A1 (en) * 2002-10-04 2004-04-07 Broadcom Corporation Variable modulation with LDPC (low density parity check) coding
US8286048B1 (en) * 2008-12-30 2012-10-09 Qualcomm Atheros, Inc. Dynamically scaled LLR for an LDPC decoder
CN107005251A (en) * 2014-11-19 2017-08-01 Lantiq Beteiligungs-GmbH & Co. KG Dynamically adjusted LDPC decoding with finite precision and number of iterations
CN108268338A (en) * 2016-12-30 2018-07-10 Western Digital Technologies, Inc. Variable node memory of gradually reduced size
CN108399109A (en) * 2017-02-07 2018-08-14 Western Digital Technologies, Inc. Soft decoding scheduling
CN108432167A (en) * 2016-01-14 2018-08-21 Intel IP Corporation Error correction coding of efficient information
CN109560818A (en) * 2017-09-25 2019-04-02 SK hynix Inc. Improved min-sum decoding for LDPC codes

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6928590B2 (en) * 2001-12-14 2005-08-09 Matrix Semiconductor, Inc. Memory device and method for storing bits in non-adjacent storage locations in a memory array
US7216283B2 (en) * 2003-06-13 2007-05-08 Broadcom Corporation Iterative metric updating when decoding LDPC (low density parity check) coded signals and LDPC coded modulation signals
US7159170B2 (en) * 2003-06-13 2007-01-02 Broadcom Corporation LDPC (low density parity check) coded modulation symbol decoding
US7436902B2 (en) * 2003-06-13 2008-10-14 Broadcom Corporation Multi-dimensional space Gray code maps for multi-dimensional phase modulation as applied to LDPC (Low Density Parity Check) coded modulation
US7281192B2 (en) * 2004-04-05 2007-10-09 Broadcom Corporation LDPC (Low Density Parity Check) coded signal decoding using parallel and simultaneous bit node and check node processing
WO2008142683A2 (en) * 2007-05-21 2008-11-27 Ramot At Tel Aviv University Ltd. Memory-efficient ldpc decoding
US8374026B2 (en) * 2009-01-30 2013-02-12 Sandisk Il Ltd. System and method of reading data using a reliability measure
US9276610B2 (en) * 2014-01-27 2016-03-01 Tensorcom, Inc. Method and apparatus of a fully-pipelined layered LDPC decoder
US10382067B2 (en) * 2017-06-08 2019-08-13 Western Digital Technologies, Inc. Parameterized iterative message passing decoder
US10447301B2 (en) * 2017-09-13 2019-10-15 Toshiba Memory Corporation Optimal LDPC bit flip decision

Also Published As

Publication number Publication date
DE112019005507T5 (en) 2021-09-16
CN113168360A (en) 2021-07-23
WO2020263325A1 (en) 2020-12-30

Similar Documents

Publication Publication Date Title
US11258465B2 (en) Content aware decoding method and system
US11451247B2 (en) Decoding signals by guessing noise
US8406051B2 (en) Iterative demodulation and decoding for multi-page memory architecture
US20180175890A1 (en) Methods and Apparatus for Error Correction Coding Based on Data Compression
TWI460733B (en) Memory controller with low density parity check code decoding capability and relevant memory controlling method
KR20170059874A (en) Method for determining similarity between messages and apparatus for executing de-duplication of similar messages
US20150178151A1 (en) Data storage device decoder and method of operation
CN111756385A (en) Error correction decoder
US10862512B2 (en) Data driven ICAD graph generation
US10326473B2 (en) Symbol-based coding for NAND flash devices
JP2023503588A (en) Content-aware bit-reversal decoder
US9639421B2 (en) Operating method of flash memory system
JP2020046871A (en) Memory system
KR20200033688A (en) Error correction circuit and operating method thereof
JP2022124682A (en) memory system
CN113168360B (en) Data driven ICAD graphics generation
KR102592870B1 (en) Error correction circuit and operating method thereof
US20170161141A1 (en) Method and apparatus for correcting data in multiple ecc blocks of raid memory
CN114078560B (en) Error correction decoding method of NAND flash memory chip, storage medium and SSD device
KR20140125987A (en) Encoder, decoder and semiconductor device including the same
US10879940B2 (en) Decoding with data mapping methods and systems
US11876535B1 (en) Memory controller and method for controlling data in decoding pipeline
US11204831B2 (en) Memory system
US20240061586A1 (en) Memory controller and method for bit flipping of low-density parity-check codes
CN116648860A (en) Decoding method of LDPC (Low Density parity check) code and decoder of LDPC code

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant